settings, risk assessment should be conducted weekly for the first 4 weeks, with reassessments every other week thereafter depending on patient condition and frequency of home visits. Nursing home residents should be reassessed for PI risk weekly for the first 4 weeks following admission, followed by quarterly assessments. PI risk should also be reassessed whenever the patient’s condition changes, such as with increased immobility or decreased oral intake.
PREVENTION
PI prevention involves scheduled turning and repositioning programs, use of support surfaces to reduce/relieve pressure, nutritional support, and general skin care. Prevention interventions should be implemented for persons at risk for PI development and those with existing PI as part of the treatment plan.
Table 46-1 presents general prevention interventions directed at risk factors for PI development.
TABLE 46-1 ■ PRESSURE INJURY PREVENTION INTERVENTIONS
Turning and Repositioning
Patients at risk for PI who are unable to move independently should be placed on scheduled repositioning programs. The recommended time interval for full change of position or turning while in bed is every 2 to 3 hours, depending on the individual patient profile and the use of support surfaces.
The long-standing practice of turning patients every 2 hours has lacked data to support it. Some patients turn themselves relatively frequently, so scheduled turning and repositioning may be an unnecessary use of staff time and disruptive to sleep at night. Turning activity can be detected by bed sensors, actigraphy, and other pressure mapping strategies (see below). With the use of improved pressure-reducing support surfaces (eg, foam, air, or gel-filled mattress overlays; low-air-loss therapy devices), turning often need not occur every 2 hours. A randomized clinical trial of PI incidence and turning schedules was conducted in 942 nursing home residents at moderate or high risk for PI development. Residents were placed on high-density foam mattresses and randomly assigned to turning schedules of every 2, 3, or 4 hours. There was no significant difference in PI incidence between the turning groups: 2.5% of those turned every 2 hours, 0.6% of those turned every 3 hours, and 3.1% of those turned every 4 hours developed PI. There was also no difference in PI rates between the high- and moderate-risk groups of residents.
The patient should be positioned to avoid direct pressure on the sacrum and heels. Position the patient in a 30-degree side-lying position (ie, at a 30-degree angle to the support surface) with the upper leg forward of the lower leg to reduce the tendency to fall back onto the sacrum. The legs should be separated with a pillow to reduce pressure. Maintain the head of the bed at the lowest degree of elevation consistent with medical conditions and limit the amount of time the head of the bed is elevated. This position decreases exposure of the sacral area to shear forces that may predispose to PI in general and to DTPI in particular. Patients with arterial disease should have their heels off the bed. If the patient will remain positioned on a pillow that “floats the heel,” that is all that is needed. However, many patients kick the pillows away or are at extremely high risk; for those patients, heel-offloading boots or foam “donuts” placed above the ankles to elevate the heel are used.
There are techniques to make turning patients easier and less time consuming. Turning sheets, draw sheets, and pillows are essential for passive movement of patients in bed. Turning sheets and overlays are useful for repositioning the patient to a side-lying position and for pulling the patient up in bed, and they help prevent dragging the patient’s skin over the bed surface.
Real-time pressure mapping systems have been used to improve the frequency of repositioning by caregivers. Pressure mapping systems may have a positive impact on PI development because the duration and amount of pressure over specific bony prominences are displayed for caregivers, allowing more accurate offloading of bony prominences.
Similar approaches are useful for patients in chairs. Individuals at risk for PI development should avoid uninterrupted sitting and should be repositioned every hour. The rationale for the shorter interval is the extremely high pressure generated on the ischial tuberosities when sitting erect in a chair, and on the sacrum when reclined or slouched in the seated position. When possible, individuals who are able should be taught to shift weight every 15 minutes while seated. Full-body change of position involves standing the patient and reseating them in the chair. This process can be labeled “Stand and March in Place” and reduces the need to manually pull patients up in the chair, a maneuver that greatly increases back and shoulder injury to staff and shear for the patient. Use of footstools and the foot pedals on wheelchairs, along with appropriate 90-degree flexion of the hip (which may be achieved with pillows, special seat cushions, or orthotic devices), can help prevent sliding in the chair. Attention to proper alignment and posture is essential. Patients who mobilize by wheelchair benefit from “tilt-in-space” chairs that recline the patient by tilting the backrest back.
Repositioning for preventing PI at the heel location involves completely offloading the heel using suspension devices, foam donuts (mentioned above), or pillows. The goal is to keep the heels free of all pressure or “float” the heels. Elevating the lower leg and calf with pillows or suspension devices spreads the pressure to the lower legs and the heel is no longer subjected to pressure. Heel suspension devices are preferable for long-term use over pillows as it can be difficult for patients to keep their legs on pillows over longer time frames.
Support Surfaces
Most institutions that care for patients or residents use a 4-inch viscoelastic foam mattress on the beds. This mattress is adequate for the patient or resident who can self-turn and move the legs. For patients at higher risk, reactive or active support surfaces should be used. Some mattresses are replaced, sometimes an overlay can be placed on the usual mattress, and sometimes the entire bed system is replaced. These decisions stem from the relative risk for PI development. The higher the risk, the more aggressive the support surface needs to be. How frequently a patient needs to be
repositioned by the staff when lying on a specialty support surface has not been fully studied.
The definitions of types of support surfaces stem from the support surface initiative of the National Pressure Injury Advisory Panel. The premise of a support surface is that if pressure can be redistributed over a larger body area, the relative pressure at any one area will be less; think of the analogy of pushing your hand onto a bed of many nails versus a single nail. Reactive support surfaces are powered or nonpowered surfaces that change their load distribution properties only in response to an applied load.
Reactive surfaces reduce pressure by immersion and envelopment of the body into the surface to reduce the deformation of tissue caused by pressure over the bony prominence. A foam mattress is an example of a nonpowered reactive surface. Active support surfaces are powered surfaces that inflate and deflate cells of air to change pressure over a body area. Active surfaces reduce pressure by periodically shifting the areas of support on anatomic locations so that deformation is not sustained over one area. In general, active support surfaces are recommended for persons at higher risk for PI development when frequent repositioning is not possible. An alternating air mattress is an example of an active powered support surface.
Additionally, clinicians should advocate for use of support surfaces in the operating room to reduce intraoperatively acquired PI. Patients at high risk for PI development during surgery are those whose operation will exceed 3 hours, those with American Society of Anesthesiologists (ASA) scores of 3 or higher, and those positioned prone for surgery. The areas of the body exposed to pressure during surgery vary with the position used for the operation. Staff familiar with the required position should place prophylactic dressings on high-risk areas or pad the table to protect those body areas.
Providing topical preparations or fabrics/linens (silk or noncotton blends) to eliminate or reduce the surface tension between the skin and the bed linen or support surface will assist in reducing friction-related injury. Use of appropriate techniques when moving patients so that skin is not dragged across linens will lessen friction-induced skin breakdown. Patients who exhibit voluntary or involuntary repetitive body movements (particularly of the heels or elbows) require stronger interventions. Use of a protective film, such as a transparent film dressing or a skin sealant; a protective dressing, such as a thin hydrocolloid; or protective padding will help to eliminate the surface contact of the area and decrease the friction between the
skin and the linens. Even though heel, ankle, and elbow protectors do nothing to reduce or relieve pressure, they can be effective aids against friction. Hip fracture patients are especially vulnerable to heel injuries. Elevation of the heel off the bed surface is a useful preventive measure.
Use of prophylactic silicone foam dressings over bony prominences has demonstrated significantly decreased heel (3% vs 12%, favoring the dressing) and sacral (1% vs 5%, favoring the dressing) PI among critical care patients, with a number needed to treat of 10 to prevent any injury. Similar outcomes exist in long-term care residents, showing reduced PI rates even in incontinent residents. Use of prophylactic silicone foam, hydrocolloid, foam, or silicone gel dressings around medical devices has also been shown to decrease PI development related to medical devices, including tracheostomy tubes, nasal intubation tubes, and nasal continuous positive airway pressure (CPAP) devices.
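For readers who want to check the number-needed-to-treat arithmetic, NNT is simply the reciprocal of the absolute risk reduction. A minimal sketch using the incidence figures quoted above (the function name is illustrative):

```python
def nnt(control_rate, treated_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    arr = control_rate - treated_rate
    if arr <= 0:
        raise ValueError("no absolute risk reduction")
    return 1.0 / arr

# Heel PI: 12% without vs 3% with a prophylactic dressing
print(round(nnt(0.12, 0.03)))  # 11 patients dressed to prevent one heel PI
# Sacral PI: 5% without vs 1% with a prophylactic dressing
print(round(nnt(0.05, 0.01)))  # 25 patients to prevent one sacral PI
```

The "NNT of 10 to prevent any injury" cited above pools both sites, which is why it is lower than either site-specific figure.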
General skin care should include routine skin inspection, incontinence assessment and management, and skin hygiene interventions to maintain skin health. Routine skin inspection should occur daily, with particular attention to bony prominences. Reddened areas should not be massaged; massage can further impair perfusion to the tissues. The skin should be evaluated for dryness and cracking, and moisturizers can be helpful. Attention should also be focused on gentle handling to prevent skin tears. Incontinence assessment and management with scheduled toileting or prompted voiding programs, and, for those unresponsive to these programs, check-and-change schedules, are important. Prompt cleansing after incontinent episodes with warm water and gentle cleansers, along with protective ointments and creams, helps maintain perineal skin health.
Nutrition
Malnutrition and/or weight loss has been associated with a fourfold higher risk of PI development. Other studies have demonstrated that the severity of the PI is associated with the severity of the malnutrition. Although it seems intuitive, it has proven difficult to define a specific causal relationship between malnutrition and PI development. Multiple studies have demonstrated a relationship between different markers of malnutrition (eg, dietary protein intake, inability to feed self, and weight loss) and PI formation. Modest evidence supports providing oral nutritional supplements to persons at risk for PI, with a relative reduction in PI incidence of 25%.
Nutritionists should routinely assess patients and residents at risk for PI. A comprehensive nutritional assessment should be completed to provide information on adequacy of nutritional intake; anthropometric measures of current and usual body weight, height, and body mass index (BMI); physical examination findings that highlight nutritional issues such as muscle wasting, edema, and micronutrient deficiencies; and functional status. Nutritionists also consider the patient/resident’s ability to eat independently. For many years, serum albumin and prealbumin were considered biochemical markers of malnutrition. However, these laboratory values should not be the sole basis of a malnutrition or failure-to-thrive diagnosis because serum protein levels reflect the severity of the inflammatory response rather than nutritional status. Both calorie and protein intake often need to be optimized in patients at risk for PI. Although direct evidence that nutritional supplementation lowers PI rates is limited, providing supplements represents good clinical practice. When dietary intake is inadequate, or deficiencies are suspected or confirmed, provide a vitamin and mineral supplement. Dietary restrictions should be revised or modified/liberalized when limitations result in decreased food and water/fluid intake. These adjustments should be managed by a registered dietitian whenever possible.
DIAGNOSIS OF PRESSURE INJURY
The diagnosis of a PI begins with a history of the patient’s exposure to pressure. Consider the events that preceded the onset of the wound, such as lying supine with the head of the bed elevated, sitting in a bedside chair or wheelchair, or being unable to move the legs, as likely precipitating events. Conditions that reduce the tolerance of skin and soft tissue for pressure and shear include reduced arterial perfusion, exposure to moisture, and protein-calorie malnutrition. Differential diagnoses include arterial ulcers, diabetic foot ulcers, skin tears, and moisture-associated skin damage. Deep tissue PI is almost always preceded by exposure to intense pressure in the prior 48 to 72 hours. Differential diagnoses of deep tissue PI include traumatic injury, such as pelvic hematoma, and embolic disease.
Detection of Early Pressure Injury
Detection of early pressure-induced tissue damage is important because early intervention may prevent evolution into more severe pressure damage. There are several nonvisual methods of detecting pressure damage currently being explored for use in clinical practice by wound care specialists: ultrasound, thermography, spectroscopy, and surface electrical capacitance. Of these, ultrasound, thermography, and surface electrical capacitance show promise for use clinically.
High-resolution ultrasound is one of the earliest noninvasive methods for visualizing skin and soft tissues, providing echogenic images of skin and deeper structures. Early work showed tissue edema in nursing home residents at risk for PI. Nearly 80% of those with abnormal ultrasound images had no documented erythema, suggesting that ultrasound can detect tissue damage before clinical signs occur. Despite these advantages, the equipment is large and expensive, requires skilled technicians to obtain useful images, and requires trained providers to interpret them.
Thermography, the measurement of skin surface temperature and temperatures of tissues below the skin surface, may provide a method for detecting nonvisual pressure damage and resultant ischemia or inflammation. Both increased and decreased skin temperature (compared to adjacent normal tissue) are associated with stage 1 PI in rehabilitation patients with pressure-induced erythema. Skin temperature variability also differentiates between nursing home residents at high and low risk for PI development and between those residents who do and do not develop PI. A study of nursing home residents using a thermographic camera found that areas of blanchable erythema with cooler skin temperatures were more likely to develop necrotic ulcers in 7 days.
Measurement of the subepidermal water content of the skin and underlying tissue can be accomplished using surface electrical capacitance devices. These devices detect and measure water or edema as the initial inflammatory response of injured tissues below the stratum corneum. When cells are injured, as occurs with early pressure-induced tissue damage, cellular permeability increases and the action potential across the cell membrane is decreased, allowing quick, high electrical charges to pass through the tissues. Devices to measure these charges are small, portable, handheld dermal phase meters, which require only light skin contact, with readings available within 3 to 8 seconds. In nursing home residents, critical care patients, veterans with spinal cord injury, and persons with dark skin tones,
this technique has detected inflammatory changes in the tissues, identifying pre-stage 1, stage 1, and deep tissue PI on the sacrum and heels.
Early Presentation
PIs are labeled using a staging system: stages 1 to 4, unstageable, and deep tissue PI (see Figure 46-1).
Stage 4 | Full-thickness skin and tissue loss with exposed or directly palpable fascia, muscle, tendon, ligament, cartilage, or bone in the ulcer. Slough and/or eschar may be visible. Epibole (rolled edges), undermining, and/or tunneling often occur. Depth varies by anatomical location. If slough or eschar obscures the extent of tissue loss, this is an unstageable pressure injury. | Sacrum |
Unstageable | Full-thickness skin and tissue loss in which the extent of tissue damage within the ulcer cannot be confirmed because it is obscured by slough or eschar. If slough or eschar is removed, a stage 3 or stage 4 pressure injury will be revealed. Stable eschar (ie, dry, adherent, intact without erythema or fluctuance) on an ischemic limb or the heel(s) should not be removed. | |
Deep tissue pressure injury | Intact or nonintact skin with localized area of persistent nonblanchable deep red, maroon, or purple discoloration or epidermal separation revealing a dark wound bed or blood-filled blister. Pain and temperature change often precede skin color changes. Discoloration may appear differently in darkly pigmented skin. This injury results from intense and/or prolonged pressure and shear forces at the bone-muscle interface. The wound may evolve rapidly to reveal the actual extent of tissue injury or may resolve without tissue loss. If necrotic tissue, subcutaneous tissue, granulation tissue, fascia, muscle, or other underlying structures are visible, this indicates a full-thickness pressure injury (unstageable, stage 3 or stage 4). Do not use DTPI to describe vascular, traumatic, neuropathic, or dermatologic conditions. | Right lateral buttock |
FIGURE 46-1. Stages of pressure injury.
While there is no label for preclinical or pre-stage 1 PI, there are often signs of reperfusion of tissue after pressure is relieved. Blanchable erythema presents as discoloration of a patch or flat, nonraised area of the skin larger than 1 cm.
Assessment of Pressure Injury Stage
PIs are commonly classified using a staging or categorical system based on the observable depth of tissue loss. The stage is determined on initial assessment by noting the deepest layer of tissue involved. The injury is not restaged unless deeper layers of tissue become exposed. The numeric classification system suggests an orderly evolution of PI; however, PIs do not progress, heal, or deteriorate in a linear fashion.
The most used staging system is the National Pressure Injury Advisory Panel’s (NPIAP) system describing six classifications of PI. The NPIAP staging system was updated in 2016. Figure 46-1 presents the definitions for PI stages. When more than one tissue type is present in a wound, stage the wound to the highest level of damage because this will dictate the treatment needed.
Stage 1 Intact skin with nonblanchable erythema. This stage involves more severe damage to underlying tissues than blanchable erythema, including lymphatic and capillary occlusion. Skin temperature is cool compared with healthy tissues, and the area may feel indurated. This stage of tissue injury is still reversible, although tissues may take 1 to 3 weeks to return to normal.
Stage 2 An open wound with exposure of the dermis. The wound is superficial, with indistinct margins. Fluid-filled blisters (open or closed) may be seen. If properly treated, these wounds should heal in 2 to 4 weeks. Stage 2 PI often develops as a result of friction (rubbing heels on the bed) or chronic exposure to moisture, which reduces the tolerance of skin for pressure.
The remaining four stages of PI are full-thickness wounds; that is, exposure of body structures beneath the skin. These wounds are often the outcome of deep tissue PI.
Deep tissue PI Initially appears as purple or maroon intact skin over an area of the body exposed to intense pressure about 48 hours prior. The initial injury is deformation and destruction of muscle cells, but because of the skin’s low metabolic demand, the skin stays alive for a time after the muscle beneath it is destroyed. The skin then lyses off, creating a blistered appearance, and the damage becomes apparent. Eschar usually forms, and surgical debridement is often needed.
Stage 3 Full-thickness loss of skin with exposure of adipose tissue. Because PIs occur over bony prominences, where there is little subcutaneous fat, this stage is not common. However, as deeper wounds heal and fill with granulation tissue, they may appear to be stage 3 PI.
Stage 4 Full-thickness loss of skin with exposure of bone, muscle, tendon, ligament, or cartilage. Osteomyelitis is a common outcome when bone is exposed and should be considered as the cause of nonhealing in any stage 4 PI.
Unstageable A PI that is most likely a full-thickness PI, but it cannot be determined if the ulcer is a stage 3 or 4 because the extent of the damage cannot be visualized due to slough or eschar in the wound bed.
Mucous membrane PI Medical devices lead to many PIs. They tend to be made of hard plastic and often rest on mucous membranes (endotracheal tubes, nasogastric tubes, catheters). These PIs cannot be staged using the PI staging system because the anatomy of mucous membrane is not the same as that of skin.
Skin failure/terminal ulcers Clinicians have long reported changes in skin color, usually on the sacrum, near the time of death. At times, these changes are labeled skin failure to parallel the failure of other body organs, citing the idea that the skin is the largest organ of the body and therefore can also fail. However, these wounds are not large enough to lead to death on their own, as would be seen with epidermolysis bullosa or Stevens-Johnson syndrome. The condition needs a different name; the reader may see changes in the terminology in the future.
MANAGEMENT OF PRESSURE INJURY
Pressure Injury Assessment
The healing of a PI is routinely assessed, usually weekly, by measuring the size of the wound, the type of tissue in the wound, the condition of the periwound skin, and the type of drainage and signs of infection. Nurses usually complete this activity, and the wound’s trajectory indicates whether the current treatment needs changing. A good practice is to reconsider the treatments in place if the wound does not show signs of healing in 2 weeks. Of course, if the wound is deteriorating, there is no benefit to waiting 2 weeks to change treatments. Signs of healing include decreasing wound size, decreasing exudate (drainage) from the wound, absence of signs of infection, and reduction in the amount of necrotic or nonviable tissue with the development of granulation tissue. Final wound closure is marked by the presence of epithelium covering the wound. Improvement rates for stage 3 and 4 injuries are slower than for stage 2 injuries: 75% of stage 2 wounds heal in 60 days, while only 17% of stage 3 or 4 injuries heal in the same time period. Tissue types seen in wounds are shown in Figure 46-2.
FIGURE 46-2. Tissue types in pressure injuries.
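The weekly size measurements lend themselves to a simple trend check. A minimal sketch with illustrative values; the 2-week reassessment rule follows the text, and the function name is hypothetical:

```python
def percent_area_reduction(baseline_cm2, current_cm2):
    """Percent reduction in wound surface area from baseline."""
    return 100.0 * (baseline_cm2 - current_cm2) / baseline_cm2

# Serial weekly measurements (length x width, cm^2): baseline, week 1, week 2
areas = [6.0, 5.8, 5.9]
reduction = percent_area_reduction(areas[0], areas[-1])
print(f"{reduction:.1f}% smaller after 2 weeks")  # ~1.7% smaller
if reduction <= 0:
    # No improvement after 2 weeks: reconsider the treatment plan
    print("Reassess treatment")
```

A wound that is essentially unchanged at 2 weeks, as in this example, would prompt a review of the treatment plan even though it has not frankly deteriorated.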
There are two research-based PI assessment tools for evaluating wound status and healing, NPIAP’s Pressure Ulcer Scale for Healing tool (PUSH) (Figure 46-3) and the Bates-Jensen Wound Assessment Tool (BWAT) (Figure 46-4). Clinical practice guidelines, expert panels, and federal nursing home guidelines recommend standardized assessment of PI, and many groups recommend use of a standardized tool for ongoing PI assessment.
FIGURE 46-3. National Pressure Injury Advisory Panel Pressure Ulcer Scale for Healing Tool. (© National Pressure Injury Advisory Panel.)
The PUSH tool incorporates surface area measurements, exudate amount, and surface appearance. The clinician measures the length and width of the wound to calculate surface area and chooses the appropriate size category (scored 0 to 10). Exudate is evaluated as none (0), light (1), moderate (2), or heavy (3). Tissue type is rated as closed (0), epithelial tissue (1), granulation tissue (2), slough (3), or necrotic tissue (4). Each of the three items is scored, and the three sub-scores are summed for a total score. The PUSH tool offers a quick assessment to predict healing outcomes, but assessment of additional wound characteristics may still be needed to develop a treatment plan for the PI.
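The PUSH scoring logic can be sketched as follows. The size-category cut-points below are transcribed from the published PUSH 3.0 tool and should be verified against Figure 46-3 before any clinical use; function names are illustrative:

```python
def size_subscore(area_cm2):
    """PUSH 3.0 length x width sub-score (0-10).

    Cut-points per the published tool: 0 = 0 cm2; 1 < 0.3; 2 = 0.3-0.6;
    3 = 0.7-1.0; 4 = 1.1-2.0; 5 = 2.1-3.0; 6 = 3.1-4.0; 7 = 4.1-8.0;
    8 = 8.1-12.0; 9 = 12.1-24.0; 10 > 24.0. Verify against Figure 46-3.
    """
    if area_cm2 == 0:
        return 0
    if area_cm2 < 0.3:
        return 1
    for sub, upper in enumerate([0.6, 1.0, 2.0, 3.0, 4.0, 8.0, 12.0, 24.0],
                                start=2):
        if area_cm2 <= upper:
            return sub
    return 10

def push_total(length_cm, width_cm, exudate, tissue):
    """Sum of size (0-10), exudate (0-3), and tissue type (0-4) sub-scores."""
    return size_subscore(length_cm * width_cm) + exudate + tissue

# Example: 2.5 x 2.0 cm wound (5.0 cm^2 -> size sub-score 7),
# moderate exudate (2), granulation tissue (2) -> total 11
print(push_total(2.5, 2.0, exudate=2, tissue=2))  # 11
```

Falling total scores over serial assessments indicate healing; rising scores indicate deterioration.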
The BWAT evaluates 13 wound characteristics using a five-point numerical rating scale and rates them from best (scored as 1) to worst (scored as 5) possible (see Figure 46-4). Characteristics include size, depth, edges, undermining or pockets, necrotic tissue type and amount, exudate type and amount, surrounding skin color, peripheral tissue edema and induration, granulation tissue, and epithelialization. Similar to the PUSH tool, once characteristics have been scored, they can be summed for a total score (range 13–65).
FIGURE 46-4. Bates-Jensen Wound Assessment Tool.
Redistribution of Pressure and Reduction in Shear
PIs occur due to exposure to pressure and shear; therefore, pressure must be reduced for the wound to heal. In general, the patient should not be positioned on the PI, but this is easy to say and sometimes difficult to do. If the patient can tolerate being positioned off the wound or is unaware of it, the plan of care should indicate which positions are to be used (eg, side-to-side positioning). However, the more common issue is that patients often developed a PI by lying or sitting in a preferred position, and it will be difficult to convince them to change. If the PI is on the sacrum, side-to-side turning, with confirmation that the sacrum is free from pressure, is the ideal practice. If the patient refuses to stay off the wound, consider upscaling the mattress or adding an overlay for additional immersion into the bed. These devices do not replace turning, but the increased immersion reduces pressure on soft tissue.
When PI occurs on the ischium in chairbound patients, a specialized chair cushion with high immersion is needed. Again, time spent lying on the sides will reduce pressure on the ischium. However, ischial wounds are very slow to heal, and it is difficult to convince a patient to lie in bed for months.
Usually a compromise is best, for example, being up a few hours at a time in the chair and then back to bed on the left or right side.
PI on the heel should be managed with a heel off-loading device (HOLD); pillows should not be used. Pillows are unreliable, often collapse under the weight of the leg, or get kicked off the bed. If the leg cannot be placed in a HOLD, apply a pressure redistribution foam dressing to the heel.
Trochanteric PI occurs in patients with contractures, who lie on their sides. Pressure redistribution in these patients is difficult because the PI are often bilateral, and the patient cannot be positioned supine as the weight of the flexed legs pulls them back onto their sides. These patients require upscaled beds, side lying positioning at less than 90 degrees with wedges, and protection of the ankles and knees from the contracted legs.
Reactive support surfaces, such as mattress overlays, or active surfaces, such as low-air-loss therapy, should be used for patients with stage 1 or 2 PI who cannot be positioned off the wound.
Healing full-thickness injury is aided by air-fluidized therapy beds or low-air-loss beds. One retrospective, multisite, comparison study showed faster healing of existing PI with air-fluidized therapy compared to both pressure reduction support surfaces and low-air-loss therapy (mean healing rate of 5.2 cm2/wk for the air-fluidized surface group compared to 1.5 cm2/wk for pressure reduction support surface group and 1.8 cm2/wk for low-air-loss therapy group).
Nutrition
Inadequate nutritional intake and undernutrition have been linked to the severity of PI and protracted healing. Providing 30 to 35 kcal/kg of calories and 1.3 to 1.5 g/kg of protein daily has been shown to significantly improve PI healing. The preferred route of feeding is oral; if nutritional requirements cannot be met orally, any dietary restrictions should be revised or modified/liberalized when they result in decreased food and water/fluid intake. Evidence on the efficacy of extra protein and energy via oral supplements is substantial. High-energy, high-protein supplements are beneficial, especially those with arginine, zinc, antioxidants, and micronutrients, when provided for more than 4 weeks. However, providing tube feeding to persons with PI has not consistently achieved positive results. Individuals receiving enteral nutrition via a percutaneous endoscopic gastrostomy (PEG) or nasogastric tube often have significantly more major complications (eg, weight loss, aspiration pneumonia, recurrent tube displacement, and death) deemed related to the intervention compared to individuals receiving an oral diet. Tube feeding formulas also contribute to diarrhea, which often contaminates open wounds. Free water will need to be added to the intake of patients on supplemental feeding.
Additional fluids may be needed for fever, prolonged vomiting, profuse sweating, diarrhea, and/or heavily exuding wounds.
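The per-kilogram calorie and protein targets above are straightforward to compute. A minimal sketch using the ranges cited in the text (function name illustrative):

```python
def pi_nutrition_targets(weight_kg):
    """Daily intake targets for a patient with PI:
    30-35 kcal/kg and 1.3-1.5 g protein/kg, per the ranges in the text.
    Returns (low, high) tuples for each target.
    """
    return {
        "kcal": (30 * weight_kg, 35 * weight_kg),
        "protein_g": (1.3 * weight_kg, 1.5 * weight_kg),
    }

# Example: 70-kg patient
targets = pi_nutrition_targets(70)
print(targets["kcal"])       # (2100, 2450)
print(targets["protein_g"])  # (91.0, 105.0)
```

Targets would then be adjusted upward for fever, heavy wound exudate, or other fluid and energy losses, as noted above.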
No evidence exists for use of supplemental vitamins or minerals (eg, vitamin A, E, C, iron) in persons with PI with no coexisting specific vitamin/mineral deficiency to improve PI healing. A daily multivitamin and mineral supplement that provides recommended daily allowances of vitamins and minerals is recommended for persons with suspected nutritional deficiencies.
Clinically, one of the most challenging aspects of nutrition is helping the anorexic elder eat. Many elders struggle with poorly fitted dentures or missing teeth, comorbid conditions or medications that reduce appetite, depression, lack of usual social environments for eating, and lack of preferred foods when institutionalized, but the most common complaint is “I’m just not hungry!” Family members bringing in favorite foods can sometimes help. The use of appetite stimulants may be helpful in some people. Megestrol acetate (Megace) has never been shown to be effective in older patients or nursing home residents and has steroidal side effects. With the increasing availability of medical marijuana, some clinicians and patients are using it as
a potential appetite stimulant. A nutritionist will be helpful to fully investigate the issues surrounding not eating.
When the patient is in palliative care or end of life/hospice care, the goals are to provide comfort and minimize symptoms. If providing supplemental nutrition adds to the individual’s comfort and is mutually agreed upon by the individual, family caregivers, and health professionals, then supplemental nutrition (in any form) is appropriate for palliative or end of life/hospice wound care. If the individual’s condition is such that providing supplemental nutrition increases discomfort and the prognosis is poor, then supplemental nutrition is not appropriate for palliative or end of life/hospice wound care.
Local Treatment
PI management includes cleaning the open wound, debriding necrotic tissue, reducing risk of infection and biofilms, and using appropriate topical therapy. As PIs are healing, dressings with nonstick surfaces should be used when dressing changes are required frequently; using typical gauze dressings can interfere with healing by removing healed tissue when they are changed.
Cleansing Solutions
PI cleansing at each dressing change is recommended in clinical practice guidelines. Cleansing removes surface contaminants, remnants of previous dressings, and microorganisms on the wound surface. Saline and water have no antiseptic properties and should seldom be used on PI because these wounds are colonized with bacteria. Antiseptic cleansing solutions include hypochlorous acid, dilute sodium hypochlorite (0.25% “half strength” Dakin’s solution), and dilute povidone iodine (10% povidone with 1% free iodine [Betadine]). Hydrogen peroxide should not be used on open wounds. Some wound cleansers are cytotoxic to fibroblasts if used in high concentrations and should not be used for long periods of time. The general time frame is to apply the cleanser to the open wound for 10 to 15 minutes and then apply the new dressing. Surfactant antimicrobial cleansing solutions for use on the wound bed include polyhexamethylene biguanide and octenidine dihydrochloride. These solutions help to loosen slough in the wound bed.
Cleansing Lavage
The use of low-pressure lavage (4–15 lb per square inch [psi]) to mechanically lift fibrin and slough from PI has been recommended for many years. Several methods can be used to achieve this pressure, from items as simple as a needle and syringe, to noncontact low-frequency ultrasonic mist, to pulsed lavage systems. To remove the debris in the wound bed, the force of the irrigation stream must be greater than the adhesion forces holding the debris to the wound surface. Using wound cleanser as the lavage fluid provides additional benefits.
In general, if a PI contains necrotic debris or is infected, then antimicrobial activity is more important than avoiding cellular toxicity. The chemical and mechanical trauma of wound cleansing should be balanced against the dirtiness of the wound. For wounds with large amounts of debris, more vigorous mechanical force and stronger solutions may be used, while for clean wounds, less force and physiologic solutions such as normal saline can be used.
Dressings
Topical therapy for PI should be provided using moist wound healing dressings. Randomized controlled trials as well as several comparative studies provide compelling support for use of moist wound healing dressings instead of any form of dry gauze dressings for PI. Moist wound healing allows wounds to re-epithelialize up to 40% faster than wounds left open to air. Controlled trials suggest that the use of semi-occlusive dressings such as hydrocolloid dressings and foam dressings improves healing of stage 2 PI. These dressings are changed every 3 to 5 days, which allows wound fluid to gather underneath the dressing, facilitating epithelial migration. As noted above, regular gauze should not be used when dressing changes are more frequent; only dressings with nonstick surfaces should be used. Table 46-2 presents general characteristics of moisture retentive dressing categories.
TABLE 46-2 ■ GENERAL CHARACTERISTICS OF MOISTURE RETENTIVE DRESSING CATEGORIES
[Table body garbled in source. Columns: Dressing Category, Definition, Uses, and Notes. Recoverable rows include composite dressings (a dressing combined with another to add desirable characteristics; properties depend on the combination of dressings used in the composite); transparent film dressings (polyurethane film with hypoallergenic adhesive; appropriate for partial-thickness wounds with minimal exudate; acts as a bacterial barrier); and hydrocolloids (regular or thin wafers, paste, or granules containing gelatin, pectin, and carboxymethylcellulose; absorb low to moderate exudate, support autolysis, and are not indicated for heavy exudate).]
Biofilm Formation
PIs are colonized with bacteria and the longer the wound is open and not healed, the greater the risk for biofilm formation. PIs are typically colonized with greater than or equal to 10⁵ organisms/mL of normal skin flora.
Although greater than or equal to 10⁵ organisms/mL of normal skin flora can cause local infection in intact skin and impair wound healing in surgical wounds, chronic wounds such as PI may bear microbial growth at this level for prolonged periods without noticeable clinical manifestations of infection
and with evidence of some healing. While not healing, the wound stays inflamed. Chronic inflammation impairs healing because the proinflammatory cytokines (interleukin 1 and tumor necrosis factor) increase the level of matrix metalloproteinases (MMPs). The MMP family is a group of calcium-dependent zinc-containing enzymes that are involved in the degradation of extracellular matrix. MMPs play a crucial role in all stages of wound healing by modifying the wound matrix, allowing for cell migration and tissue remodeling. However, in chronic wounds, there is excessive expression of MMPs, which retards healing while promoting ongoing inflammation.
Biofilms are microbial communities in which the bacteria are encased in exopolysaccharide (EPS) and are less metabolically active than their free-living counterparts. Clinical studies show 60% of chronic wounds contain a biofilm. Biofilms also promote chronic inflammation. The nature of the EPS makes biofilms very resistant to endogenous antibodies and phagocytic cells and exogenous antibiotics and antimicrobial solutions. Bacterial biofilms may be the underlying pathology preventing PIs from healing. Biofilm is not visible to the naked eye, but it should be suspected in all chronic wounds.
The best method of preventing biofilm development is adequate, timely, and complete debridement of necrotic tissue followed by appropriate topical therapy. Cadexomer iodine, medical-grade honey, silver, and polyhexamethylene biguanide (PHMB) dressings retard the development of new biofilm. Regardless of the type of topical treatment, the biofilm will reform, and maintenance debridement is required.
Pressure Injury Infection
Infection is a common cause of nonhealing or worsening PI. PIs are chronic, ischemic wounds and, as such, are more susceptible to infection; they also occur in malnourished or poorly perfused patients who cannot mount a full immune response. The most common organisms identified in pressure ulcers are Staphylococcus aureus, Proteus mirabilis, Pseudomonas aeruginosa, and Enterococcus faecalis. Infected PI also can serve as reservoirs for infections with antibiotic-resistant bacteria. Methicillin-resistant
Staphylococcus aureus (MRSA) colonization in infected PI is now reported commonly.
Stage 3, 4, and unstageable PI should be evaluated for acute and chronic infection. Tissue biopsy or quantitative swab technique to determine wound “tissue” bioburden is the preferred method to diagnose infection. Do not rely
on cultures of the wound surface or wound drainage; they reflect only surface contamination. Tissue biopsy is the gold standard but is not always feasible. The Levine technique uses light pressure on the wound bed to express wound fluid for culture.
Osteomyelitis has been reported in 32% of patients with stage 4 PI. The strongest clinical indicator of osteomyelitis is palpably soft or rough bone, or visible bone. Most infections are polymicrobial, with a predominance of S aureus, Enterobacteriaceae spp., and anaerobes. Plain film x-ray is useful in identifying probable osteomyelitis, but the final diagnosis is best made from MRI and bone biopsy with cultures. Management of osteomyelitis includes surgical debridement of infected bone and antibiotics. Infectious diseases specialists vary in their prescribed duration of antibiotics based in part on the level of infection (eg, infection limited to cortical bone may not require a full 6 weeks of therapy). Antibiotics are not risk free and often lead to diarrhea, further contaminating the wound bed. When exposure to fecal matter cannot be controlled and the PI is contaminated from it, fecal diversion with catheters and diverting colostomy should be considered.
In patients with large, infected PI, or patients who cannot improve perfusion to a limb, more aggressive procedures such as amputation and hemicorporectomy are sometimes required. Surgical complication rates (including dehiscence, infection, necrosis, and hematoma) for both younger paraplegic patients and nonparaplegic older patients are as high as 50%, and PI recurrence at the same site has been reported ranging from 30% to 70%.
Thus, the long-term outcomes have not been ideal even though 70% to 80% of surgically treated PI are healed upon discharge from the hospital. Further, while recurrence of PI at the same site is lower for older patients (40%) compared to younger paraplegic patients (more than 70%), 30% of older patients develop new injury sites, and mortality in older patients ranges from nearly 50% to 68%.
Hyperbaric oxygen therapy (HBOT), delivered within a hyperbaric chamber, can be a useful adjunct in the nonoperative management of chronic refractory osteomyelitis. Chronic refractory osteomyelitis is loosely defined as osteomyelitis that persists despite appropriate medical and/or surgical management. Unfortunately, in PI associated with chronic refractory osteomyelitis, the soft tissue and bony involvement tends to be quite advanced, so that outcomes with HBOT are suboptimal. HBOT can also be utilized to improve perfusion in failing or threatened soft tissue flaps
following surgical reconstruction of advanced PI. Overall benefits of HBOT include:
Enhanced osteoclast function
Improved leukocyte bacterial killing
Stimulation of vascular endothelial growth factor and platelet-derived growth factor
Enhanced angiogenesis
Debridement
Wound debridement is recognized as an important component of wound bed management. It reduces devitalized or necrotic tissue, decreases risk for infection, and promotes granulation tissue formation. Benefits of debridement also may include removal of senescent fibroblasts and nonmigratory hyperproliferative epithelium, and stimulation of blood-borne growth factors. Maintenance debridement is a common phrase used to describe repeated, sometimes weekly, debridement of wound beds. Today, there are clinicians who examine and debride wounds in long-term care settings, reducing the need for residents to leave the facility. In hospital settings, more aggressive surgical debridement can be performed under anesthesia.
Five methods of debridement (surgical or sharp, mechanical, autolytic, enzymatic, and biosurgical) are available. Choice of debridement method is based on clinician preference and availability rather than specific evidence. Clinical practice guidelines on PI treatment recommend surgical or sharp debridement for extensive necrosis or when obtaining a clean wound bed quickly is important; more conservative methods (autolytic and enzymatic) are recommended for wounds that are not acutely infected but rather stagnant in healing. Those patients tend to be in long-term care or home care environments.
Debridement should be performed only when there is adequate perfusion to the wound. A vascular assessment prior to debridement of lower extremity PI should be performed to determine whether arterial supply is sufficient to support healing of the debrided wound. For this reason, stable eschar on ischemic limbs is not debrided; without arterial supply, the wound cannot heal. Patients with significant comorbid burden or who are at the end of life should not be repeatedly debrided when healing is not possible. Debridement of wounds may be done for control of odor or to unroof the wound and drain pockets of purulence.
Surgical debridement involves use of a scalpel, scissors, or other sharp instruments to widely excise the nonviable tissue. It is the most rapid form of debridement. A PI should be surgically debrided when there is a clinical need for extensive debridement; the degree of undermining and sinus tract or tunneling cannot be determined; there is advancing cellulitis; bone and infected hardware must be removed; and/or the individual is septic from the PI. Relative contraindications include anticoagulant therapy and bleeding disorders. Surgical debridement extends into viable tissue, and the resultant bleeding stimulates the production of bloodborne endogenous growth factors acting as chemoattractants for inflammatory cells and mitogens for both fibroblasts and epithelial cells. Health care professionals who use sharp debridement must demonstrate their competency in sharp wound debridement skills and meet licensing requirements. One multicenter, randomized, controlled trial comparing the effects of topical growth factor versus placebo on healing noted that independent of treatment effects, centers that used sharp debridement more frequently experienced better healing rates than those that used sharp debridement less frequently.
Sharp debridement can be performed with a scalpel, curette, scissors, or rongeurs. This form of debridement is done outside of the operating room but should be a sterile procedure. It is vital that health professionals who perform conservative sharp debridement possess knowledge of anatomy and adequate training and experience. Maintenance debridement is generally done using these methods.
Mechanical debridement involves the use of wet-to-dry dressings, wet-to-moist dressings, monofilament/microfiber debridement pads, and hydrosurgery. Wet-to-dry and wet-to-moist gauze dressings continue to be used for debridement, despite the significant disadvantages of increased labor time for application and removal of the dressings, removing viable tissue as well as nonviable tissue, and pain. This method of debridement should be used cautiously, as it can traumatize new granulation tissue and epithelial tissue, and adequate analgesia should be administered when this method is employed. It is not recommended in clinical practice guidelines. A monofilament/microfiber debridement pad removes slough and devitalized tissue, and potentially disrupts biofilm within the wound bed. The hydrosurgical water knife is an alternative tool to achieve surgical-type debridement. Little evidence on its use exists in PI wound management. Clinical evidence in wounds of different etiology indicates that hydrosurgery can achieve faster debridement than other nonsurgical methods. Grossly infected PI can be surgically debrided and then continually lavaged with antiseptic instillations using negative pressure wound systems to draw the fluid from the wound.
Enzymatic debridement involves applying a concentrated, commercially prepared proteolytic or fibrinolytic enzyme to the surface of the necrotic tissue, in the expectation that it will aggressively degrade necrosis by digesting devitalized tissue. The main enzymatic ointment available in the United States is collagenase. A common practice is to use enzymatic debridement following sharp debridement. Enzymatic ointments have yielded consistently positive results for their efficacy in wound debridement.
Debridement with enzymatic ointments is faster than with autolysis, and more conservative than sharp debridement. The antimicrobial activity of nanocrystalline silver can be inhibited when it is combined with collagenase, and iodine inhibits the activity of collagenase. Consult package inserts for details on which dressing materials are safely combined with collagenase.
Autolytic debridement is the process of using the body’s macrophages and proteolytic enzymes to remove nonviable tissue. Use of occlusive dressings maintains a moist wound environment that allows enzymes within the wound fluid to digest necrotic tissue. Autolytic debridement will be slower to achieve a clean wound bed than other methods. Autolytic debridement is contraindicated in the presence of untreated infection or extensive necrotic tissue, in large PI with undermining and sinus tracts, and in individuals with compromised immunity or severe malnutrition.
Biosurgery is the fifth method of debridement. Biosurgery is the application of maggots (disinfected fly larvae, Phaenicia sericata) to the wound, typically at a density of 5 to 8 larvae/cm². Comparative controlled studies evaluating the use of maggot therapy for PI debridement have shown a higher proportion of complete debridement in maggot-treated wounds versus
standard debridement therapy (80% vs 48%, respectively). Biosurgery may not be acceptable to all patients and may not be available in all areas.
Advanced Wound Therapy
Evidence supporting use of advanced wound therapy in PI is building. Collagen dressings are mostly derived from bovine, porcine, or avian skin
made into sheets and pads, as particles, and as gels. In chronic wounds such as PI, collagen lowers the elastase level, altering the chronicity of the wound and enhancing the healing process through dermal fibroblast proliferation, migration of the cells, and development of the capillary bed. Growth factors commonly stimulate the proliferation of neutrophils, macrophages, and keratinocytes, all of which are active in different stages of wound healing.
Exogenous growth factors, platelet-rich plasma, and recombinant human platelet-derived growth factor have also been shown to improve healing of PI. These fairly novel approaches to PI healing should be used when the wound bed is clean and free of infection and the patient’s ability to heal is enhanced with adequate nutrition and control of comorbid conditions that delay healing.
Compelling evidence exists for the use of electrical stimulation to treat stage 2 through 4 PI. Electrical stimulation has been strongly recommended in many guidelines with level A evidence from randomized clinical trials, yet the implementation of this treatment lags. Physical therapists often use electrical stimulation for this purpose.
Ultrasound is an acoustic therapy in which mechanical vibration is transmitted in a wave formation at frequencies beyond the upper limit of human hearing. Noncontact low frequency ultrasound therapy (NCLFUS) has been used for the resolution of deep tissue PI, but the quality of the evidence is modest. NCLFUS must begin while the deep tissue PI is still evolving, which is often an issue in using the therapy.
Negative pressure wound therapy (NPWT) applies vacuum to the wound to remove third space edema, thereby enhancing nutrient and oxygen delivery. It has its greatest efficacy in reducing wound volume, and therefore it can serve as an adjuvant therapy when combined with debridement and other treatments that promote healing, such as nutritional support and pressure redistribution. NPWT is intended for use in PI free of necrotic tissue.
Therefore, NPWT should begin after debridement. PI that has failed to improve with standard moist wound healing care and has poor granulation tissue or excess exudate can also benefit from NPWT.
Surgical Reconstruction
Surgical reconstruction of PI most often uses myocutaneous and fasciocutaneous flaps to close wounds. Flap surgery is considered when patients have been unable to heal a full-thickness wound, despite adequate
nutrition, control of risk factors (such as smoking), and adherence to offloading. Patients with nonhealing severe PI should be referred to an experienced surgeon to evaluate the eligibility for surgical repair of the PI. Expectations of the operation and the ability of the individual to tolerate surgery and surgical recovery should be discussed and understood.
Drugs
Pharmacologic interventions for PI include antibiotics and analgesics. Antibiotics may be systemic or local. Clinicians should institute systemic antibiotics for patients exhibiting signs and symptoms of systemic infection such as sepsis or cellulitis with associated fever and an elevated white blood cell count. Systemic antibiotics should be initiated for osteomyelitis or for the prevention of bacterial endocarditis in persons with valvular heart disease who require debridement of a PI. Because of the high mortality of sepsis associated with PI despite appropriate antibiotics, broad-spectrum coverage for aerobic gram-negative rods, gram-positive cocci, and anaerobes is indicated pending culture results in patients with suspected bacteremia. For oral therapy for methicillin-sensitive Streptococcus and methicillin-sensitive Staphylococcus, cephalexin, cefadroxil, dicloxacillin, and clindamycin are recommended. For suspected MRSA, clindamycin, amoxicillin with doxycycline, or trimethoprim-sulfamethoxazole can be used. When parenteral therapy is indicated, many drugs can be used, but cefazolin, ceftriaxone, and clindamycin are generally used for empiric therapy.
Vancomycin may be required for MRSA. If anaerobic infection is suspected, metronidazole, a carbapenem, or a beta-lactam/beta-lactamase inhibitor combination should be used. PI with odorous drainage in patients who are not able to undergo debridement can be managed by applying ground metronidazole to the wound bed.
Antiseptics should be used in full-thickness PI. Topical cadexomer iodine, silver, honey, or PHMB dressings are the preferred antiseptics because they reduce the bioburden in the wound bed. Mupirocin is effective against MRSA. On the other hand, clinicians should not use povidone-iodine, iodophor, sodium hypochlorite, hydrogen peroxide, or acetic acid as topical therapies on clean and healing PI. These antiseptic agents have been shown to be toxic to fibroblasts and to impair wound healing in in vitro laboratory studies, and how these solutions affect human wounds is unclear.
Using nonpharmacologic pain management strategies to reduce pain associated with PI reflects good practice. There is no direct evidence from the literature search on the effectiveness of nonpharmacologic pain management strategies for treating pain associated with PI; however, nonpharmacologic pain management strategies are well-acknowledged as being useful in pain management.
PI-related pain can be minimized by keeping the wound bed moist and covered. Use of hydrogels, hydrocolloids, alginates, polymeric membrane foams, and soft silicone dressings allows for less frequent dressing changes, and less trauma and pain on removal as they are nonadherent to the wound bed.
Pharmacologic strategies for severe wound pain include providing opioids and/or nonsteroidal anti-inflammatory drugs (NSAIDs) 30 minutes prior to the procedure and afterward, and administering topical anesthetics or topical opioids using hydrogels as a transport medium. Two options have been successful for chronic wound pain: EMLA cream (eutectic mixture of lidocaine 2.5% and prilocaine 2.5%) and diamorphine gel. EMLA cream reduces debridement pain scores in chronic venous ulcers and may have a cutaneous vasoactive effect. Use of EMLA cream in venous ulcers has been associated with a reduction in pain scores (measured on a 100-mm scale) of 21 mm. Low-dose topical morphine (diamorphine) has been used in several small, randomized, placebo-controlled studies to successfully control PI-related pain.
PALLIATIVE PRESSURE INJURY TREATMENT
Not all PI can be healed. Palliative PI care means that the goals are comfort and limiting the extent or impact of the PI but without the intent of healing.
Palliative care may be indicated for terminally ill patients such as those with end-stage cancer or in the terminal stages of other diseases. Institutionalized older adults with multiple comorbidities or older adults with severe functional decline may also benefit from palliative care. Palliative PI care may include adequate debridement of necrotic tissue and identification and treatment of infection. PI should be dressed with highly absorptive dressings to reduce the frequency of dressing changes. Odorous drainage should be treated with metronidazole placed into the wound. The odor can be concealed with room deodorizers. Pain management should be a priority.
Prevention should still consist of use of reactive or active support surfaces
and attention to scheduled repositioning, although time frames may be adjusted or lengthened to ease the burden on the patient. Providing pain medication 30 to 40 minutes prior to repositioning activity and use of positioning devices may help those with pain on movement.
SUMMARY
PIs are chronic wounds and, as such, require patience and diligence by clinicians. Some PI never heal, and most require long periods of treatment with slow progress. Thus, identification of persons at risk for developing PI and aggressive prevention interventions to keep PI from starting are essential. Prevention includes screening for risk, followed by risk assessment using standardized risk assessment tools to determine individual-specific risk, and implementing targeted prevention interventions based on identified risk factors. Scheduled repositioning programs, use of reactive and active support surfaces, assessment and management of nutrition, and use of prophylactic dressings are key prevention strategies. More research is needed to define optimal turning intervals for various support surfaces and to better elucidate the effect of nutrition interventions on both preventing and healing PI.
For those persons who do develop PI, clinicians should provide appropriate treatment during early injury stages to capitalize on healing progress in the initial 3 months. Offloading and nutrition are imperative. Adequate, timely, and complete debridement of necrotic tissue, identification and treatment of infection and management of biofilm development, and providing a moist wound environment are the key tenets of appropriate PI care. All preventive and therapeutic interventions, and progress of the injury, should be carefully documented in the medical record. Unfortunately, no intervention or combination of interventions has demonstrated the ability to completely eliminate PI. Thus, even as we develop more refined and specific screening, detection, and preventive interventions, it is likely we will continue to see PI in all health care settings. The information presented in this chapter should provide a foundation for developing a successful approach to both those at risk for PI development and those with existing PI.
ACKNOWLEDGMENT
This chapter is updated from the previous editions written by Barbara M. Bates-Jensen, PhD, RN, FAAN, and Anabel Patlan, BS.
FURTHER READING
National Pressure Ulcer Advisory Panel, European Pressure Ulcer Advisory Panel, and Pan Pacific Pressure Injury Alliance. Prevention and Treatment of Pressure Ulcers: Clinical Practice Guideline. Haesler E, ed. Perth, Australia: Cambridge Media; 2019.
Padula WV, Delarmente BA. The national cost of hospital acquired pressure injuries in the United States. Int Wound J. 2019;16(3):634–640.
Schultz G, Bjarnsholt T, James GA, et al. Consensus guidelines for the identification and treatment of biofilms in chronic nonhealing wounds. Wound Repair Regen. 2017;25(5):744–757.
Incontinence
Camille P. Vaughan, Theodore M. Johnson, II
DEFINITION AND EPIDEMIOLOGY
Defined as the complaint of any involuntary leakage of urine, urinary incontinence is a common and bothersome condition in older adults. Incontinence prevalence increases with age and with increasing frailty, and is
1.3 to 2.0 times greater in older women than older men. Among community-dwelling older women, the prevalence of any urinary incontinence is approximately 35%; among older men, it is approximately 22%. The prevalence of daily urinary incontinence in older community-dwelling persons is approximately 12% for women and 5% for men. The prevalence is higher among nursing home residents, approaching 60%. Incontinence ranges in severity from rare episodes of dribbling small amounts of urine to continuous urine leakage with concomitant fecal incontinence. In addition, many older people who do not “leak urine” may have bothersome lower
urinary tract symptoms such as urgency, frequency, and nocturia (waking from sleep at night to void) impacting their lives.
Physical health, psychological well-being, social status, and the costs of health care can all be adversely affected by incontinence. Urinary incontinence can be cured or greatly improved, especially in those who have adequate mobility and mental function. Even when not curable, incontinence can always be managed to improve comfort, reduce caregiver burden, and minimize costs of caring for the condition. Because many older patients are embarrassed to discuss their incontinence or unaware that treatment is available, it is essential for providers to periodically ask about incontinence and to note this concern as a problem (Table 47-1). This chapter covers the basic pathophysiology of urinary incontinence in older persons, provides
detailed information on the evaluation and management, and briefly addresses fecal incontinence.
TABLE 47-1 ■ ASKING ABOUT URINARY INCONTINENCE
Learning Objectives
Identify different types of urinary incontinence based on clinical assessment.
Describe initial management strategies for incontinence in the older adult, which involve soliciting patient preferences and goals for care.
Determine when referral to a urologic or gynecologic specialist is indicated.
Key Clinical Points
1. Incontinence in the older adult is often the result of potentially reversible and modifiable conditions.
2. Multicomponent interventions, including lifestyle and behavioral therapies, are effective first-line treatments for incontinence in the older adult.
3. Treatment of incontinence in the older adult, particularly drug therapy, should involve consideration of patient preferences and comorbid conditions.
PATHOPHYSIOLOGY AND CLASSIFICATION
Continence requires effective functioning of the lower urinary tract, adequate cognitive and physical functioning, motivation, and an appropriate environment (Table 47-2). Normal urination is a complex process and the neurophysiology of urination remains incompletely understood. Proper bladder filling and emptying are influenced by higher centers in the brain stem, cerebral cortex, and cerebellum. The brain stem facilitates urination and the cerebral cortex exerts a predominantly inhibitory influence.
Additionally, the loss of the suprapontine inhibitory influences over the sacral micturition center from diseases such as stroke and Parkinson disease can produce incontinence in older patients. Even in the absence of specific, overt neurologic lesions, poor bladder control is associated with inadequate activation of the orbitofrontal cortex and white matter hyperintensities (evidence of white matter wasting) in the right inferior frontal cortex.
Disorders of the brain stem and lesions rostral to the lumbosacral spinal cord can interfere with the coordination of bladder contraction and urethral relaxation leading to detrusor-sphincter dyssynergia. Interruptions of the sacral innervation can cause impaired bladder contraction and problems with continence.
TABLE 47-2 ■ REQUIREMENTS FOR CONTINENCE
At the most basic level, urination is governed by a reflex in the sacral spinal cord. During normal bladder filling, afferent pathways (via somatic and autonomic nerves) carry information regarding bladder volume to the spinal cord. Motor output is adjusted accordingly (Figure 47-1). Sympathetic tone closes the bladder neck and inhibits parasympathetic tone (thus relaxing the dome of the bladder); somatic innervation maintains tone in the pelvic floor musculature (including striated muscle around the urethra). Voluntary pelvic floor muscle contracture also leads to inhibition of parasympathetic tone. For bladder emptying, sympathetic and somatic tones diminish, and parasympathetic, cholinergic-mediated impulses cause the bladder to contract. Normal urination is a dynamic process, requiring the coordination of several physiologic processes. Under normal circumstances, as the bladder fills, bladder pressure remains low (≤ 15 cm H2O). The bladder
volume at first urge to void is variable but generally occurs between 150 and 350 mL; normal bladder capacity is 300 to 600 mL. When normal urination is initiated (usually every 3 to 4 hours during wakefulness), the detrusor contracts and detrusor pressure increases until it exceeds urethral resistance (which falls immediately before bladder contraction). Urine flow typically occurs over less than 2 minutes. If at any time during bladder filling total bladder pressure exceeds outlet resistance, urinary leakage occurs. Transmitted intra-abdominal pressure alone, from coughing or sneezing, may cause leakage in someone with low outlet resistance pressure or urethral sphincter weakness. Alternatively, the bladder can contract involuntarily and cause urinary leakage.
FIGURE 47-1. Central and peripheral nervous system involvement in micturition.
Risk Factors
As is the case for a number of other common geriatric problems, multiple disorders often interact to cause urinary incontinence. Determining the cause or causes facilitates proper management.
Several age-related changes can contribute to the development of urinary incontinence. In general, postvoid residual (PVR) urine volume is greater with increasing age. Reduced functional bladder capacity has been associated with advanced age, yet this may be due to the higher prevalence of involuntary bladder contractions and detrusor overactivity in older persons.
Involuntary bladder contractions are found in 40% to 75% of older incontinent patients, but also in 5% to 10% of older continent women and in up to one-third of older men with no or minimal urinary symptoms. Detrusor overactivity has been associated with specific anatomical findings (protrusion junctions and ultra-close abutment of detrusor muscle cells) on bladder biopsy and with evidence of cortical changes, which suggests impaired integration of bladder afferent signals. While involuntary bladder contractions do not always result in urinary incontinence, when combined with impaired mobility, these contractions likely account for a substantial proportion of incontinence in older functionally disabled patients. Aging in women is also associated with a decline in bladder outlet and urethral resistance pressure. While prior childbirths (either via vaginal deliveries or Caesarean section) are associated with a greater risk of future incontinence, this association is weaker for women older than 65 years. Additionally, in older populations, poor vaginal support is more closely associated with obstructive urinary symptoms than with urinary leakage and urgency. Obesity, deconditioned muscles, and hysterectomy predispose women to future development of incontinence. White women are more likely, as a group, to have stress urinary incontinence than women of other racial or ethnic groups. Oral estrogen therapy has also been identified as a risk factor for subsequent development of urinary incontinence in women.
Older men with prostatic enlargement may have decreased urine flow rates and detrusor muscle instability. Aging is also associated with more frequent nocturia, which may in part be related to higher urine production at night. Many older men and women have detrusor hyperactivity combined with poor bladder contractility (which has been termed “detrusor
hyperactivity with impaired contractility”). These individuals have evidence of widespread muscle degeneration on detrusor muscle biopsy.
Acute or Reversible Causes
The difference between acute (or reversible) forms of incontinence and persistent (or established) incontinence is clinically meaningful although not
always distinct. Acute incontinence refers to those situations where the incontinence is of sudden onset, usually linked to an acute illness or an iatrogenic problem, and subsides once the illness or problem has been resolved. Persistent incontinence refers to incontinence that is not precipitated by an acute illness and endures over time. Reversible factors related to acute incontinence may also contribute to persistent incontinence.
The potentially reversible causes of urinary incontinence are outlined in Table 47-3. These causes include impaired ability (or reduced willingness) to reach a toilet; conditions that affect the lower urinary tract, such as an infection, atrophic vaginitis, or a surgical procedure; conditions that cause or contribute to polyuria; and iatrogenic factors. Because of urinary frequency and urgency, many older persons, especially those limited in mobility, carefully arrange their schedules (and may even limit social activities) in order to be close to a toilet. Thus, an acute illness can precipitate incontinence by disrupting this delicate balance. Hospitalization, with its attendant environmental barriers (such as catheters, tubes, lines, and bedside rails), and the delirium and immobility that often accompany acute illnesses in older patients can contribute to acute incontinence. Acute incontinence in these situations is likely to resolve with resolution of the underlying acute illness. In a substantial proportion of patients, however, incontinence may persist for several weeks after hospitalization and should be further evaluated.
TABLE 47-3 ■ REVERSIBLE CONDITIONS THAT CAUSE OR CONTRIBUTE TO URINARY INCONTINENCE IN OLDER PERSONS
Constipation and resultant fecal impaction are common in both acutely and chronically ill older patients. Impaction may cause mechanical distension of the bladder and outlet impingement that can interfere with adequate bladder emptying and cause reflex bladder contractions. Relief of a fecal impaction and effective treatment of constipation can lead to resolution of urinary, as well as fecal, incontinence (see Chapter 87, Constipation).
An elevated PVR should be considered in any older patient who suddenly develops urinary incontinence. In addition to fecal impaction, causes of incontinence with a high PVR in an older patient include immobility and anticholinergic, narcotic, calcium channel–blocking, and beta-adrenergic medications (Table 47-4). In addition, urinary retention may be an acute manifestation of an underlying CNS process (eg, spinal cord compression or stroke).
TABLE 47-4 ■ MEDICATIONS THAT CAN POTENTIALLY AFFECT CONTINENCE
Inflammation of the lower urinary tract may precipitate or exacerbate incontinence. Atrophic vaginitis and urethritis are common among older women (part of genitourinary syndrome of menopause), which can result in dysuria, urgency, frequency, and even incontinence (see Chapter 36, Gynecologic Disorders). Physical signs include patchy erythema and increased vascularity of the labia minora and vaginal epithelium, petechiae and friability, and urethral erythema often with an inflamed caruncle (dark or bright red epithelium usually at the inferior aspect of the urethra). Topical estrogen therapy may be helpful in older women with these findings (discussed further later in chapter). Acute urinary tract infection can precipitate or exacerbate incontinence. However, urine loss among older patients with chronic incontinence, especially frail nursing home residents, with otherwise asymptomatic bacteriuria (with or without pyuria) does not appear to improve with bacteriuria treatment. These patients, therefore, should not be treated with antibiotics because of the costs and risks unless the incontinence is new or acutely worsened.
Diuretics (especially rapid-acting loop diuretics) and conditions that cause polyuria, including hyperglycemia and hypercalcemia, can precipitate acute incontinence. Patients with volume-expanded states, such as those with congestive heart failure and lower extremity venous insufficiency, may have polyuria at night, which can contribute to nocturia and nocturnal incontinence. As is the case in many other conditions in geriatric patients, a wide variety of medications can play a role in the development of incontinence in older adults (see Table 47-4). When feasible, stopping the
medication, switching to an alternative, or modifying the dosage schedule can be beneficial and may be the only necessary treatment for incontinence. In addition to medications, drinking multiple caffeinated or alcoholic beverages can cause urinary frequency and urgency, which may precipitate
incontinence.
Persistent Incontinence
Table 47-5 lists the clinical definitions and common causes of persistent urinary incontinence. These types can overlap with each other, and an individual patient may have more than one type simultaneously. Incontinence results from one or a combination of two basic abnormalities:
Failure to properly store urine, caused by a hyperactive or poorly compliant bladder or by diminished outflow resistance; and/or
Failure to properly empty the bladder, caused by a poorly contractile bladder or by increased outflow resistance.
TABLE 47-5 ■ BASIC TYPES AND CAUSES OF PERSISTENT URINARY INCONTINENCE
Stress incontinence is common in older women, especially in ambulatory
clinic settings. The symptoms of stress incontinence are very specific: leakage coincident with increases in intra-abdominal pressure caused by coughing, sneezing, laughing, or exercising. Stress incontinence may be infrequent and involve very small amounts of urine. It may need no specific treatment in women who are not bothered by it; on the other hand, it may be so severe and/or bothersome that it renders the person housebound. Among women, it is most often associated with weakened supporting tissues (resulting from lack of estrogen, obesity, previous vaginal deliveries, and/or surgery), which cause hypermobility of the bladder outlet and urethra. Some women, generally those who have had previous lower urinary tract surgery,
have intrinsic urethral weakness with failure of the urethra to fully close and prevent urine loss. These patients tend to have severe incontinence and occasionally have constant wetting. Stress incontinence is unusual in men, and it mainly occurs following transurethral interventions for benign prostatic conditions or after surgical or radiation therapy for lower urinary tract malignancy when the anatomic sphincters are damaged.
Urgency incontinence can be caused by a variety of lower genitourinary and neurologic disorders (see Table 47-5). This type of incontinence is characterized by a sudden strong desire to void, accompanied by a fear of leakage, and followed by urine loss. The amount of urine lost can be large but varies depending on sphincter function and the ability of the patient to abort a bladder contraction. Urgency incontinence, when it occurs along with urinary urgency, daytime urinary frequency, and nocturia, has been called “wet overactive bladder.” Urgency incontinence is most often, but not always, associated with involuntary bladder contractions. Some patients have a poorly compliant bladder without involuntary contractions (eg, interstitial cystitis or following irradiation). A subgroup of older incontinent patients with detrusor hyperactivity also have impaired bladder contractility, emptying less than one-third of their bladder volume with involuntary contractions on urodynamic testing. These patients may be predisposed to significant urinary retention and may require training to learn to completely empty their bladder with voiding.
The use of the term “overflow incontinence” has come in and out of favor, but related terms such as acute or chronic urinary retention and (either stress or urge) incontinence with a high PVR are also common. Acute retention of urine is “a painful, palpable or percussable bladder, when the patient is unable to pass any urine”; and chronic retention of urine is where the patient has a “non-painful bladder, which remains palpable or percussable after the patient has passed urine. . . (and) the patient may be incontinent.” A high PVR can result from anatomic or neurogenic obstruction to urinary outflow, a hypotonic or acontractile bladder, or both. Prostatic enlargement, diabetic neuropathic bladder, and urethral stricture commonly cause urinary symptoms and may occasionally cause incontinence. Low spinal cord injury and anatomic obstruction in women (caused by pelvic prolapse and urethral distortion) are less common causes of overflow incontinence in older patients. Several types of drugs also can contribute to this type of persistent incontinence (see Table 47-4). Some patients with
lesions rostral to the lumbosacral spinal cord (such as those caused by multiple sclerosis) develop detrusor–sphincter dyssynergia and consequent urinary retention, which must be treated similarly to overflow incontinence; in some instances, a sphincterotomy is necessary.
Functional incontinence results when an older person is unable to reach a toilet on time due to impaired cognitive function and/or mobility.
Recognizing and removing barriers to continence, such as inaccessible toilets, and addressing psychological disorders are critical. These factors also exacerbate other types of persistent incontinence. Patients with functional factors contributing to incontinence also may have abnormalities of the lower genitourinary tract, most commonly detrusor overactivity. In some patients, it can be very difficult to determine whether the functional factors or the genitourinary factors predominate without a trial of specific types of treatment.
Many older patients have more than one type of incontinence. Most common are the combination of urgency and stress incontinence (often called mixed incontinence) among older women as well as the combination of urge and functional incontinence among nursing home residents.
In addition, many older patients have the syndrome of “overactive bladder,” in which urinary urgency is present, sometimes along with urinary symptoms such as frequency and nocturia; these patients may or may not be incontinent. They should be assessed and treated similarly to patients with symptoms of urgency incontinence.
EVALUATION
Guidelines recommend a basic diagnostic evaluation, which includes a history (which can be enhanced by a bladder diary record), a physical examination, and a urinalysis. A PVR may not be needed in all older patients with incontinence, but reasonably should be done for patients at risk for urinary retention (see below). Other diagnostic studies may be indicated in selected patients (Table 47-6). Figure 47-2 summarizes the recommended diagnostic evaluation of incontinent older patients.
TABLE 47-6 ■ COMPONENTS OF THE DIAGNOSTIC EVALUATION OF PERSISTENT URINARY INCONTINENCEa
All Patients
Focused history (bladder diary record may be helpful in some patients)
Targeted physical examination
Urinalysis
Postvoid residual determinationb
Selected Patients
Laboratory studies
  Urine culture
  Urine cytology
  Blood glucose, calcium
  Renal function tests
Renal and/or bladder ultrasound
Gynecologic evaluation
Urologic evaluation
Cystourethroscopy
Urodynamic tests
  Simple
    Observation of voiding
    Standing cough test for stress incontinence
    Simple (single-channel) cystometry
    Urine flowmetry (for men)
  Complex
    Multichannel cystometrogram
    Pressure-flow study
    Leak point pressure
    Urethral pressure profilometry
    Sphincter electromyography
    Videourodynamics
aSee also Table 47-7.
bThe recommendation for a postvoid residual determination in all incontinent elderly patients is controversial (see text).
Reproduced with permission from Kane RL, Ouslander JG, Resnick B, et al. Essentials of Clinical Geriatrics. 8th ed. New York, NY: McGraw Hill; 2018.
FIGURE 47-2. Summary of assessment and initial management of geriatric urinary incontinence. (See referenced tables and text for details.)
The objectives of the basic evaluation are threefold:
To identify potentially reversible conditions contributing to incontinence (see Table 47-3).
To identify conditions requiring further diagnostic tests and/or gynecologic or urologic referral.
To develop a management plan, which may include referral for further evaluation or a therapeutic trial of behavioral and/or pharmacologic therapy.
In patients with recent or sudden onset of incontinence (especially when following an acute medical condition and/or hospitalization), the potentially reversible causes of acute incontinence (see Table 47-3) should be ruled out by a brief history, a physical examination, and basic laboratory studies including urinalysis, culture, and tests for serum glucose or calcium, if indicated.
The history should focus on characteristics of the incontinence, including current problems and medications, and on the impact of the incontinence on the patient and caregivers. The incontinence should be characterized in terms of frequency, amount, and timing of leakage; and symptoms of voiding difficulty including hesitancy, intermittent stream, and straining to void.
Symptoms of urgency versus stress incontinence should be sought, recognizing that symptom history does not perfectly predict subtype of urinary incontinence. For those with nocturia, or nocturnal incontinence, questions regarding sedating medications and/or sleep dysfunction are indicated. Bladder diary records, such as the one shown in Figure 47-3, can be helpful in characterizing symptoms, as well as in following the response to treatment. The physical examination includes abdominal, rectal, and genital examinations, as well as an evaluation of lumbosacral innervation.
The abdominal examination is insensitive for an elevated PVR or chronic urinary retention, but gross bladder distention (eg, ≥ 500 mL) should be easily detected. In acute urinary retention, the distended bladder is a firm, midline mass that emanates from the pelvis and is dull to percussion. In gross distention with either acute or chronic retention, the superior margin of the bladder is often identifiable by either palpation or percussion.
FIGURE 47-3. Example of a bladder record for ambulatory care settings. (Adapted with permission from Diagnosis of Bladder Control Problems (Urinary Incontinence). How do doctors find the cause of a bladder control problem? National Institute of Diabetes and Digestive and Kidney Diseases (NIDDK). National Institutes of Health.)
The pelvic examination in women includes inspection for significant prolapse, signs of vaginal tissue inflammation, and a cough test to detect stress incontinence. The cough stress test can corroborate stress urinary incontinence (UI) symptoms and can be performed with the patient standing or in the lithotomy position. Sensitivity is highest when the patient is standing. The patient should have a comfortably full bladder (approximately half capacity, eg, 200 mL) and is instructed to cough vigorously once (followed by three additional coughs if negative). The test is insensitive if the patient cannot cooperate, is inhibited, or the bladder volume is low. Leakage simultaneous with coughing documents stress incontinence; delayed leakage (eg, after 3 seconds) or the initiation of voiding generally indicates a cough-induced bladder contraction.
Special attention should be given to assessing mobility and mental status, because impairments may be either causing the incontinence or interacting with urologic and neurologic disorders to worsen the condition. Patients with nocturia or nocturnal incontinence should be examined for signs of congestive heart failure or venous insufficiency with edema.
Urinalysis should be performed to look for evidence of infection, hematuria, and glucosuria. Clean urine specimens are often difficult to obtain from frail incontinent patients, but collection can be performed reliably without first resorting to in-and-out catheterization. For men who cannot void spontaneously, a condom-type catheter can be used after cleaning the penis to collect a specimen that accurately reflects bladder urine. While there is a clear relationship between acute symptomatic urinary tract infection and incontinence, the relationship between asymptomatic bacteriuria and incontinence is controversial. In nursing home populations, there is no benefit to treating bacteriuria in patients with chronic, stable incontinence. For patients in other settings, it is difficult to make clear recommendations. For the initial evaluation of incontinence among noninstitutionalized incontinent patients, it is reasonable to eradicate the bacteriuria and observe the effect on the incontinence.
A determination of PVR should be performed in patients at risk for retention, including those with diabetes with neuropathy, neurologic disorders, symptoms of voiding difficulty, or a history of urinary retention, and those taking medications with significant anticholinergic effects. Neither the history nor the physical examination is sensitive or specific enough for this purpose in geriatric patients. In these patients the PVR determination can be done by portable ultrasonography if equipment is available. To be accurate, the PVR determination should be done within a few minutes of a spontaneous continent or incontinent void. PVR values of less than 100 mL in the absence of straining to void generally reflect adequate bladder emptying in geriatric patients, whereas PVR values greater than 200 mL are abnormal. Values between 100 mL and 200 mL must be interpreted in the context of other patient symptoms. Noninvasive measurement of urinary flow rate in men may be helpful in identifying obstruction and/or bladder contractility problems.
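The PVR cutoffs above amount to a simple decision rule, which can be sketched as follows. This is an illustrative sketch only: the function name and category labels are hypothetical, while the 100-mL and 200-mL thresholds and the caveat about straining come from the text.

```python
def interpret_pvr(pvr_ml: float, straining_to_void: bool = False) -> str:
    """Classify a postvoid residual (PVR) volume using the cutoffs in the text.

    pvr_ml: PVR in mL, measured within a few minutes of a spontaneous void.
    straining_to_void: straining undermines the 'adequate emptying' reading,
    so a low PVR obtained with straining is not taken as reassuring.
    """
    if pvr_ml < 100 and not straining_to_void:
        return "adequate bladder emptying"
    if pvr_ml > 200:
        return "abnormal (elevated PVR)"
    # 100-200 mL, or a low value obtained with straining
    return "indeterminate; interpret with other patient symptoms"
```

For example, a PVR of 150 mL falls in the indeterminate band and must be weighed against the patient's other symptoms, whereas a PVR of 250 mL is abnormal regardless.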
Clinical practice guidelines do not recommend a complex urologic, gynecologic, or urodynamic evaluation for all incontinent older patients. Many patients can be treated with a trial of behavioral and/or drug therapy
after a basic evaluation is completed and potentially reversible factors are addressed. Table 47-7 lists examples of criteria for referring incontinent geriatric patients for further urologic, gynecologic, and/or urodynamic evaluations.
TABLE 47-7 ■ EXAMPLES OF CRITERIA FOR REFERRAL OF INCONTINENT GERIATRIC PATIENTS FOR FURTHER UROLOGIC, GYNECOLOGIC, OR URODYNAMIC EVALUATION
MANAGEMENT
Several therapeutic modalities are used in managing incontinent patients (Table 47-8). Special attention should be paid to the management of acute incontinence, which is common in older patients in acute care hospitals or nursing homes. Unfortunately, older incontinent patients may be managed in the acute hospital setting with indwelling catheterization. Rarely is this justified, for example by hemodynamic instability and the need to assess hourly urine output during the acute phase of an illness; more often it is unnecessary. This practice poses a substantial and unwarranted risk of catheter-induced infection and interferes with mobility. Widespread use of quality indicators may reduce the frequency of these practices. Although time-consuming and more difficult, making toilets and toilet substitutes accessible and combining this accessibility with some form of scheduled toileting is a more appropriate approach. Indwelling catheters may be appropriate in the palliative setting at end of life. All the factors that can cause or contribute to a reversible form of incontinence (see Table 47-3) should be attended to in order to maximize the potential for regaining continence.
TABLE 47-8 ■ TREATMENT OPTIONS FOR URINARY INCONTINENCE IN OLDER ADULTS
Management should be guided by the type of incontinence, and more importantly by patient and/or family preferences. Patients should be carefully questioned about the degree of bother the incontinence is causing and how much risk and cost they are willing to undertake to address it. Supportive measures are critical in managing all forms of incontinence and should be used in conjunction with other, more specific treatment modalities.
Education, environmental manipulations, appropriate use of toilet substitutes, avoidance of iatrogenic contributions to incontinence, modifications of
diuretic and fluid intake patterns (especially caffeine), treatment of constipation, and good skin care are all important. Additionally, in women there is evidence that modest weight loss (approximately 5%–10% of body weight) leads to reduction in urinary incontinence episodes.
Specially designed incontinence undergarments and pads can be very helpful in many patients, but they must be used appropriately. Although they can be effective, several caveats should be raised:
Garments, external devices, and pads are a nonspecific treatment.
Many patients are curable if treated with specific therapies, and some have potentially serious factors underlying their incontinence that must be diagnosed and treated.
Patients often prefer more specific incontinence therapy designed to restore a normal pattern of voiding and continence.
Incontinence devices, garments, and pads are expensive and rarely covered by third-party payers.
To a large extent, the optimal treatment of persistent incontinence
depends on identifying the type or types. Table 47-9 outlines the primary treatments for the basic types of persistent incontinence in the geriatric population. Each treatment modality is briefly discussed below.
TABLE 47-9 ■ PRIMARY TREATMENTS FOR DIFFERENT TYPES OF GERIATRIC URINARY INCONTINENCE
Behavioral Interventions
Many types of behavioral interventions are available for the management of urinary incontinence. These may be categorized as patient dependent (ie, require adequate function and motivation of the patient), in which the goal is to restore a normal pattern of voiding and continence, or caregiver dependent, which can be used for functionally disabled patients, in which the goal is to keep the patient and the environment dry. Table 47-10 summarizes these interventions. The patient-dependent interventions generally involve the patient’s continuous, self-monitoring use of a bladder diary record such as the one depicted in Figure 47-3. In several studies, behavioral therapies are equivalent to drug therapy, with approximately
three-fourths of patients reporting improvement. Frequently, these behavioral interventions are the treatment modality preferred by patients.
TABLE 47-10 ■ EXAMPLES OF BEHAVIORAL INTERVENTIONS FOR URINARY INCONTINENCE
To be successful, patient-dependent interventions require a functional, motivated patient capable of learning and practice. These interventions also require a skilled, enthusiastic trainer and frequent patient contact, though positive results can be achieved by motivated patients who are self-directing their behavioral treatment following brief consultation with a provider or with the assistance of a pamphlet or book. Pelvic floor muscle (Kegel) exercises are effective in men and women for the treatment of urge, stress, and mixed stress–urge incontinence, as well as incontinence following prostatectomy.
These exercises consist of repetitive contractions of the pelvic floor muscles. They can be taught by brief verbal or written instructions, or by having the patient squeeze the examiner's inserted finger during a vaginal or rectal examination (without performing a Valsalva maneuver, which has the opposite of the intended effect). Once learned, the exercises should be practiced many times throughout the day (eg, three sets daily of 15 contractions, building up from 3 seconds to 10 seconds in duration). Computer-assisted or manual biofeedback can be especially helpful for teaching patients who bear down (increasing intra-abdominal pressure) when attempting to contract pelvic floor muscles. Biofeedback involves the use of bladder, rectal, or vaginal pressure or electrical activity recordings, or the examiner's finger in the vagina or rectum with a hand placed on the abdomen, to train patients to contract pelvic floor muscles while leaving the abdominal muscles relaxed. Electrical stimulation, introduced either vaginally or rectally, and magnetic stimulation have also been used to help identify and train pelvic floor muscles in the management of stress incontinence and to inhibit involuntary bladder contractions in patients with urgency incontinence. The applicability of pelvic floor electrical or magnetic stimulation is limited because of equipment needs and because it may not be acceptable to many older patients in the United States.
Once these muscles are strengthened and better muscle control is achieved, the patient must be taught to use the exercises in everyday life under the circumstances that precipitate the incontinence. With practice, a pelvic floor muscle contraction can be performed immediately before a cough, laugh, or sneeze to assist with stress incontinence, and in rapid serial contractions to abort detrusor contractions associated with urinary urgency. Pelvic floor muscle exercises are also effective in certain situations to prevent urinary incontinence and, if done preoperatively, allow a more rapid return to continence in the postoperative period.
Other forms of patient-dependent interventions include bladder training and bladder retraining. Bladder training involves the educational components taught during biofeedback, without the use of biofeedback equipment.
Patients are taught pelvic muscle exercises and urge suppression strategies to manage urgency and are taught to use bladder diary records regularly. There is some evidence that these techniques are as effective as drug therapy in cognitively intact, motivated older adults. Bladder retraining as described here is used primarily after a period of temporary bladder catheterization.
Table 47-11 is an example of a bladder-retraining protocol. This protocol is applicable to patients who have had an indwelling catheter for monitoring of urinary output during a period of acute illness or for the treatment of urinary
retention with overflow incontinence. Such catheters always should be removed as soon as possible, and this type of bladder-retraining protocol should enable most indwelling catheters to be removed from patients in acute care hospitals as well as some residents in long-term care settings. A patient who continues to have difficulty voiding after 1 to 2 weeks of such a
bladder-retraining protocol should be examined for other potentially reversible causes of voiding difficulties. When difficulties persist, a urologic referral should be considered.
TABLE 47-11 ■ EXAMPLE OF A BLADDER-RETRAINING PROTOCOL
The goal of caregiver-dependent interventions such as scheduled toileting, habit training, and prompted voiding is to prevent incontinence episodes rather than to restore a normal pattern of voiding and complete continence. Highly motivated caregivers and cooperative patients are essential for these interventions to be successful. Scheduled toileting involves assisting the patient to the toilet at regular intervals, usually every 2 to 3 hours during the day regardless of the presence or absence of the patient’s expressed desire to void. Habit training involves a schedule of toileting that is individually modified according to the patient’s pattern of continent voids and incontinence episodes. Prompted voiding is a behavioral
protocol that involves focusing the patient’s attention on his or her bladder by asking if the patient is wet or dry, asking (prompting) the patient to attempt to void (up to three times) every 2 hours during the day, toileting the patient if he or she responds positively, giving praise as a social reward for attempting to toilet and maintaining continence, and offering fluids routinely. Between 25% and 40% of nursing home residents respond very well to daytime prompted voiding, and these responders can be identified by carrying out a
3-day trial of the intervention. Care for incontinence at night should be individualized. Routine incontinence care can be very disruptive to sleep. Because older people tend to awaken frequently at night, one way to individualize toileting in a long-term care setting is to check on the patient every hour or two, and only prompt to toilet when the patient is found awake.
Drug Treatment
Table 47-12 lists the drugs used to treat incontinence. In general, carefully selected older research participants have been able to achieve roughly equivalent efficacy from drug treatment compared to their younger counterparts. Drug treatment should generally be prescribed in conjunction with one or more behavioral interventions.
TABLE 47-12 ■ DRUGS COMMONLY USED IN THE UNITED STATES TO TREAT URINARY INCONTINENCE
DRUGS | DOSAGES | MECHANISMS OF ACTION | POTENTIAL ADVERSE EFFECTS

Bladder relaxants for urgency incontinence (increase bladder capacity and diminish involuntary bladder contractions; in general, reduced dosing with renal or hepatic impairment)

Antimuscarinic
Darifenacin (Enablex): 7.5–15 mg qd. Anticholinergic; lower dose if reduced hepatic function.
Fesoterodine (Toviaz): 4–8 mg qd. Anticholinergic effects (dry mouth, blurry vision, elevated intraocular pressure, delirium, constipation).
Oxybutynin, immediate release: 2.5–5.0 mg bid to tid. Anticholinergic effects as above.
Oxybutynin patch (Oxytrol): 3.9 mg/d, patch applied twice weekly; available over-the-counter. As above, but with less dry mouth.
Oxybutynin gel: one application daily. As above, but with less dry mouth.
Solifenacin (VESIcare): 5–10 mg qd. Anticholinergic; lower dose for severe renal impairment or reduced hepatic function.
Tolterodine (Detrol, Detrol LA): 1–2 mg bid (immediate release); 2–4 mg qd (LA). Anticholinergic; lower dose for severe renal impairment or reduced hepatic function.
Trospium chloride (Sanctura, Sanctura XR): 20 mg bid (20 mg qd with CrCl <30); XR 60 mg qAM; give on an empty stomach at least 1 h before a meal. As above, but with less dry mouth.

Beta-3-agonist
Mirabegron (Myrbetriq): 25–50 mg qd (maximum 25 mg qd with CrCl 15–29). Hypertension, tachycardia.
Vibegron (Gemtesa): 75 mg qd. Avoid in severe renal or hepatic impairment; headache.

Vaginal estrogen (strengthens periurethral tissues, reduces irritation from atrophic vaginitis, and increases urethral tone)
Topical: 0.5–1.0 g per application. Contraindicated in women with a personal history of gynecologic or breast cancer.
Vaginal ring (Estring): one ring every 3 mo.

Drugs for stress incontinence: increase urethral smooth-muscle tone (no drug is approved for this indication in the United States; see text).
For urgency incontinence, drugs with bladder smooth-muscle relaxant properties are used and are available in two different classes: antimuscarinic (anticholinergic) and beta-3-agonist (noradrenergic). Antimuscarinic drugs are available in immediate release, controlled release, and topical preparations, while beta-3-agonist therapy is available in a once daily formulation. Most studies suggest a reduction of 60% to 70% in the frequency of incontinence episodes with drug therapy in selected older
adults. While different drugs are likely equivalent on average, a particular patient who does not respond to one drug (either because of lack of efficacy or presence of adverse effects) may benefit from a trial on a different agent. Some patients may not respond for 2 weeks or even longer, so a 2- to 4-week trial should be undertaken when a new drug is prescribed.
Antimuscarinic drugs may have recognizable, bothersome systemic anticholinergic side effects such as dry mouth and constipation, which are most common with immediate-release formulations. They should be used carefully in patients with glaucoma and severe gastroesophageal reflux.
Patients with Alzheimer disease and other forms of dementia must be followed for the development of drug-induced delirium when placed on antimuscarinic medications because of these drugs' anticholinergic effects. While the short-term cognitive effects of these agents are well understood, multiple epidemiologic studies suggest that bladder antimuscarinic drug therapy (dose intensity, duration) is associated with a 50% relative increase in the odds of developing dementia. The results of these studies do not, however, preclude a treatment trial in the older adult population in the context of shared decision making. Beta-3-agonists could potentially lead to elevated blood pressure and should not be used in persons with uncontrolled hypertension. Bladder relaxants may rarely precipitate urinary retention; men with some degree of outflow obstruction, diabetic patients, and patients with impaired bladder contractility are at the highest risk and should be followed carefully.
For older men with overactive bladder symptoms (with or without urinary incontinence), alpha adrenergic antagonists may be a better choice for first-line drug therapy. Bladder relaxants may also be used as single drug therapy in men, and the combination of a bladder relaxant and an alpha antagonist combined with pelvic floor muscle exercise-based behavioral therapy may be more effective in many older men.
Drug treatment for stress incontinence is less efficacious than drug treatment for urgency incontinence, and no drug is approved for this indication in the United States. Duloxetine, a drug approved for depression in the United States, also has alpha-adrenergic effects on the lower urinary tract through a spinal cord mechanism, and is recommended in other countries only as second-line therapy for stress incontinence in those who prefer medication to surgical treatment. More recent studies have shown an increased risk of developing or worsening incontinence with either opposed or unopposed oral conjugated estrogen therapy. Topical estrogen, used either chronically or on an intermittent basis (ie, a 1- to 2-month course), may be effective for the treatment of irritative voiding symptoms and urgency incontinence in women with atrophic vaginitis and urethritis. Although no specific topical treatment regimen has been shown to be more effective than others, therapy usually involves 0.5 to 1 g of vaginal cream nightly for 1 to 2 weeks, followed by a maintenance dose two or three times per week, or a controlled, slow-release vaginal ring. Several months of therapy are often necessary to observe therapeutic benefit.
Many older women have a combination of both urgency and stress incontinence or mixed incontinence. Treatment of mixed incontinence should initially target the symptoms the individual finds most bothersome.
Drug treatment in the setting of chronic urinary retention or urinary incontinence with high PVR, with either a cholinergic agonist or an alpha-adrenergic antagonist, is usually not efficacious. Although alpha-adrenergic blockers and 5-alpha reductase inhibitors are useful for treatment of symptoms suggestive of benign prostatic hyperplasia, they may not obviate the need for surgical intervention or for catheter drainage in patients with bladder outlet obstruction who have chronic urinary retention or urinary incontinence with high PVRs (ie, residuals consistently greater than 200–300 mL).
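The "residuals consistently greater than 200–300 mL" criterion can be expressed as a small check over serial post-void residual (PVR) measurements. This is a sketch only; the function name and the choice of 200 mL as the default threshold (the text gives a 200–300 mL range) are illustrative assumptions, not a clinical rule:

```python
def consistently_elevated_pvr(pvr_measurements_ml, threshold_ml=200):
    """Return True when every serial PVR measurement exceeds the threshold,
    ie, residuals are *consistently* high (illustrative encoding of the
    200-300 mL criterion in the text; threshold choice is a judgment call)."""
    if not pvr_measurements_ml:
        return False  # no data: cannot conclude consistent elevation
    return all(v > threshold_ml for v in pvr_measurements_ml)
```

The point of requiring every measurement to exceed the threshold, rather than a single one, is that one isolated high residual does not establish chronic retention.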
Surgical Approaches
Surgery is a well-established treatment for stress urinary incontinence. Surgery should be considered for older women with stress incontinence and for women with a significant degree of pelvic prolapse associated with stress incontinence, or with incontinence and urinary retention, who are unresponsive to nonsurgical treatment. As with many other surgical procedures, patient selection and the experience of the surgeon are critical to success. Any woman being considered for surgical therapy should have a thorough evaluation before undergoing the procedure. Urodynamics are not necessarily required for stress incontinence or stress-predominant mixed incontinence without voiding difficulties. In general, surgical treatment is designed to correct urethral closure problems and to remedy defects in support of the urethrovesical angle. Midurethral slings with synthetic mesh and periurethral collagen injections are outpatient procedures. A pubovaginal sling using autologous fascia and bladder neck suspension
using the Burch colposuspension procedure also have high rates of initial success, although subjective treatment outcomes are strongest with the midurethral sling. Additional long-term studies are needed.
Men with stress urinary incontinence (and usually with continuous urinary leakage) can be treated with the implantation of an artificial urinary sphincter (or other similar mechanical device designed to reversibly block urine outflow in the penile urethra). Newer “sling” procedures have been used in men, also.
While procedural approaches have previously been considered only for stress incontinence, there are also approaches for the treatment of refractory urgency incontinence, including neuromodulation and botulinum toxin A injection. Percutaneous tibial nerve stimulation (PTNS) is a minimally invasive procedure that provides electrical stimulation using a small needle electrode placed in proximity to the posterior tibial nerve as it travels near the medial malleolus. Patients undergo weekly 30-minute stimulation sessions in an outpatient clinical setting for up to 12 weeks. PTNS is generally well tolerated with minimal discomfort and has demonstrated effectiveness in adults without underlying neurogenic bladder. Sacral neuromodulation involves a two-stage operation under general anesthesia, with implantation of a pacemaker-like generator near the hip and sacral leads to stimulate the pudendal and sacral nerves. The procedure can be a safe, effective, and durable treatment, though cure rates for older adults and those with multiple medical comorbidities have been lower than in younger, healthier populations. The use of botulinum toxin A for refractory urgency incontinence has proven effective in several randomized, double-blinded, controlled studies and is approved by the US Food and Drug Administration. Botulinum toxin A injection is performed via cystoscopy under direct visualization, with injections into multiple bladder sites. Benefit is believed to be due to an effect on both efferent and afferent pathways, and early trial results suggest that the therapy lasts 6 months and is equally effective in older and younger age groups.
Surgery may be indicated in men whose incontinence is associated with outflow obstruction. Men may have either chronic urinary retention or acute urinary retention, the latter either precipitated (eg, by an anticholinergic drug, an alpha-agonist drug, or recent instrumentation) or spontaneous. Those who have had complete acute urinary retention requiring mechanical drainage, and particularly those with spontaneous retention, are likely to have another
episode within a short period of time and should be evaluated for a prostatic resection, as should men with incontinence associated with enough residual urine to be causing recurrent symptomatic infections or hydronephrosis. In men who do not meet these criteria, the decision should be based on weighing carefully the degree to which the symptoms bother the patient, the potential benefits of surgery (obstructive symptoms often respond better than irritative symptoms), and the risks of surgery (which may be minimal with newer prostate resection techniques).
MECHANICAL DEVICES, UNDERGARMENTS, CATHETERS, AND OTHER SUPPORTS
Three basic types of catheters and catheterization procedures are used for the management of urinary incontinence: external catheters, intermittent straight (“in-and-out”) catheterization, and chronic indwelling catheterization.
External catheters for men generally consist of some type of condom or adhesive connecting the penis to a drainage system. Improvements in design, along with proper procedure and skin care when applying the catheter, decrease the risk of skin irritation as well as the frequency with which the catheter falls off. Existing data suggest that patients with condom catheters are at increased risk of developing symptomatic infection compared with incontinent adults who rely on absorbent pads or diapers. External catheters should be used only to manage intractable incontinence in male patients who do not have urinary retention and who are extremely physically dependent or pursuing a palliative approach to care. An external catheter that fits over the urethra for use in female patients is available commercially but is not yet widely used.
Intermittent catheterization can help in the management of patients with urinary retention and overflow incontinence. The procedure can be carried out by either the patient or a caregiver and involves straight catheterization two to four or more times daily, depending on residual urine volumes. The goal is to keep residual urine volume generally less than approximately 300 to 400 mL. In the home setting, the catheter should be kept clean (but not necessarily sterile). Studies conducted largely among younger paraplegic patients show that this technique is practical and reduces the risk of symptomatic infection compared with the risk associated with chronic catheterization. The technique may be especially useful following removal of
an indwelling catheter in a bladder-retraining protocol (see Table 47-11). However, older nursing home residents, especially men, may be difficult to catheterize. Anatomic abnormalities commonly found in the lower urinary tracts of older adults may increase the risk of infection because of repeated straight catheterizations. In addition, using this technique in an institutional setting, which may have an abundance of organisms relatively resistant to many commonly used antimicrobial agents, may yield an unacceptable risk of nosocomial infections. Using sterile catheter trays for these procedures would be very expensive. Thus, it may be extremely difficult to implement such a program in a typical nursing home setting.
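The volume-driven schedule described above (two to four or more catheterizations daily, aiming to keep residuals under roughly 300–400 mL) can be sketched as a simple adjustment rule. The step-up logic, the 400 mL default target, and the function name below are hypothetical illustrations, not a clinical algorithm:

```python
def adjust_catheterization_frequency(current_per_day, recent_volumes_ml,
                                     target_ml=400):
    """Hypothetical step-up rule: if any recent catheterized volume exceeds
    the ~300-400 mL target, add one daily catheterization; always keep at
    least the twice-daily floor mentioned in the text."""
    base = max(current_per_day, 2)  # at least two catheterizations daily
    if any(v > target_ml for v in recent_volumes_ml):
        return base + 1
    return base
```

In practice the frequency is individualized clinically; the sketch only makes explicit that the schedule responds to measured residual volumes rather than being fixed.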
Chronic indwelling bladder catheterization has higher risks than other forms of management and should be used sparingly and only as indicated. On occasion, experts may recommend suprapubic catheters over urethral catheters for chronic use. Chronic bladder catheterization over months to years has been shown to increase the incidence of chronic bacteriuria, bladder stones, periurethral abscesses (urethral catheters), urethral injury including penile erosion (urethral catheters), and even bladder cancer. Older nursing home residents managed with chronic indwelling bladder catheters are at higher risk of developing symptomatic infections. The limited evidence available to date does not suggest that routine changing of indwelling bladder catheters is warranted, though this is a common practice. Given these risks, it seems appropriate to recommend that the use of chronic indwelling bladder catheters be limited to certain specific situations (Table 47-13). When indwelling catheterization is used, certain principles of catheter care should be observed in an attempt to minimize complications (Table 47-14). When there is reduced urinary output, increased leakage around the catheter, or increased pain, it is important to perform an abdominal and genital examination to make certain the catheter is in the bladder and not obstructed. In these situations, it is often necessary to replace the catheter.
TABLE 47-13 ■ INDICATIONS FOR CHRONIC INDWELLING CATHETER USE
TABLE 47-14 ■ KEY PRINCIPLES OF CHRONIC INDWELLING CATHETER CARE
In men with post-prostatectomy urinary incontinence who are not candidates for or do not desire surgical therapy, an external penile clamp compressing the urethra may be a useful adjunctive therapy. Patients must
be able to remember to remove the clamp every 2 hours and be physically capable of doing so. Some women with urinary incontinence and pelvic prolapse may respond well to a vaginal pessary, a device that slows the progression of prolapse by supporting the vagina and increasing the tightness of the tissues and muscles of the pelvis. Pessaries are made of rubber, plastic, or silicone and come in a variety of types. Patients often need to be individually fitted with the device.
There are a variety of available absorbent products and undergarments that can help patients contain leakage, including disposable inserts, reusable and single-use adult diapers, and disposable underwear. Additionally, there are pads that can protect beds and/or chairs. Criteria for success of these devices revolve around fit, odor control, cost, and ability to hold urine. More frequent changing of pads is expensive and inconvenient, but frequently helps control odor. Less frequent changing leaves skin wetter and likely more vulnerable to friction and abrasion.
In general, older adults want general information on urinary incontinence and sources of help. There are multiple consumer advocacy groups for those with incontinence that are dedicated to improving the lives of patients with urinary incontinence, for example, the National Association for Continence (www.nafc.org), and the Simon Foundation (www.simonfoundation.org) in the United States, that provide educational materials, reviews of available products, and links to researchers and manufacturers who provide incontinence materials.
FECAL INCONTINENCE
Fecal incontinence is less common than urinary incontinence. Its occurrence is relatively unusual in older patients who are continent of urine. Thirty to fifty percent of older patients in institutional settings with frequent urinary incontinence, however, also have episodes of fecal incontinence. This coexistence suggests common pathophysiologic mechanisms.
Defecation, like urination, is a physiologic process that involves smooth and striated muscles, central and peripheral innervation, coordination of reflex responses, mental awareness, and physical ability to get to a toilet.
Disruption of any of these factors can lead to fecal incontinence.
The most common causes of fecal incontinence are problems with constipation and laxative use, neurologic disorders, and colorectal disorders
(Table 47-15). In patients who are fed by enteral tubes, hyperosmotic feedings can precipitate diarrhea and fecal incontinence. Diluting the feedings or using slow continuous infusion is sometimes helpful.
Constipation is extremely common in older persons and, when chronic, can lead to fecal impaction and incontinence. Hard stool in a fecal impaction irritates the rectum and results in the production of mucus and fluid. This fluid leaks around the mass of impacted stool and precipitates incontinence. Constipation is difficult to define; technically, it indicates fewer than three bowel movements per week, although many patients use the term to describe difficult passage of hard stools or a feeling of incomplete evacuation. Poor dietary and toilet habits, immobility, and chronic laxative abuse are the most common causes of constipation in older persons. Appropriate management of constipation prevents fecal impaction and resulting fecal incontinence. The management of constipation is discussed thoroughly in Chapter 87.
TABLE 47-15 ■ CAUSES OF FECAL INCONTINENCE
Fecal incontinence is sometimes amenable to biofeedback therapy. For those patients with end-stage dementia, a program of alternating constipating agents (if necessary) and laxatives in a routine schedule (such as giving
regular osmotic laxatives as weekly enemas) may be effective in controlling defecation in many patients with fecal incontinence. Functionally dependent patients should be toileted regularly after a meal to take advantage of, or possibly regain, the gastrocolic reflex. Experience suggests that these measures should permit management of even severely cognitively impaired patients. As a last resort, specially designed incontinence undergarments are sometimes helpful in managing fecal incontinence and preventing skin irritation and other complications.
FURTHER READING
Brown JS, Vittinghoff E, Wyman JF, et al. Urinary incontinence: does it increase risk for falls and fractures? Study of Osteoporotic Fractures Research Group. J Am Geriatr Soc. 2000;48:721–725.
Burgio KL, Kraus SR, Johnson TM II, et al. Combined behavioral and two-drug therapy for lower urinary tract symptoms in men: the COBALT randomized clinical trial. JAMA Intern Med. 2020;180:411–419.
Burgio KL, Locher JL, Goode PS, et al. Behavioral versus drug treatment for urge urinary incontinence in older women: a randomized controlled trial. JAMA. 1998;280:1995–2000.
Coupland CAC, Hill T, Dening T, et al. Anticholinergic drug exposure and the risk of dementia: a nested case-control study. JAMA Intern Med. 2019;179:1084–1093.
Fowler CJ, Griffiths DJ. A decade of functional brain imaging applied to bladder control. Neurourol Urodyn. 2010;29:49–55.
Gibson W, Johnson T II, Kirschner-Hermanns R, et al. Incontinence in frail elderly persons: report from the 6th International Consultation on Incontinence. Neurourol Urodyn. 2021;40(1):38–54.
Goode PS, Burgio KL, Locher JL, et al. Effect of behavioral training with or without pelvic floor electrical stimulation on stress incontinence in women: a randomized controlled trial. JAMA. 2003;290:345–352.
Guralnick ML, Fritel X, Tarcan T, et al. ICS Educational Module: Cough stress test in the evaluation of female urinary incontinence: introducing the ICS-Uniform Cough Stress Test. Neurourol Urodyn. 2018;37:1849–1855.
Hendrix SL, Cochrane BB, Nygaard IE, et al. Effects of estrogen with and without progestin on urinary incontinence. JAMA. 2005;293:935–948.
Kane RL, Ouslander JG, Resnick B, Malone ML. Essentials of Clinical Geriatrics. 8th ed. New York, NY: McGraw-Hill; 2018.
Lightner DJ, Gomelsky A, Souter L, Vasavada SP. Diagnosis and treatment of overactive bladder (non-neurogenic) in adults: AUA/SUFU Guideline. J Urol. 2019;202:558–563.
McVary KT, Roehrborn CG, Avins AL, et al. Update on AUA guideline on the management of benign prostatic hyperplasia. J Urol. 2011;185(5):1793–1803.
NICE treatment guidelines (NG 123) for Urinary Incontinence and pelvic organ prolapse in women. www.nice.org.uk/guidance/ng123. Last updated 24 June 2019. Accessed January 31, 2021.
Ouslander JG. Management of overactive bladder. N Engl J Med. 2004;350:786–799.
Subak LL, Wing R, West DS, et al. Weight loss to treat urinary incontinence in overweight and obese women. N Engl J Med. 2009;360:481–490.
Thom DH. Variation in estimates of urinary incontinence prevalence in the community: effects of differences in definition, population characteristics, and study type. J Am Geriatr Soc. 1998;46:473–480.
Vaughan CP, Markland AD. Urinary incontinence in women. Ann Intern Med. 2020;172:ITC17–ITC32.
Chapter
Elder Mistreatment
Mark S. Lachs, Tony Rosen
DEFINITIONS
In the broadest context, elder mistreatment subsumes a variety of activities perpetrated upon an older person by others. There is as yet no universally agreed definition or classification of elder mistreatment. Proposed strategies for defining or classifying elder mistreatment have included using the type of abuse (eg, physical vs verbal abuse), motive (eg, intentional vs unintentional neglect), perpetrator relationship (eg, family vs paid caregiver), and setting (eg, community vs nursing home). Nonetheless, the clinician attempting to care for a victimized older person or to understand the spectrum of elder mistreatment will encounter several thematically similar definitions. For example, the Older Americans Act of 1975 defines elder abuse as “the willful infliction of pain, injury, or mental anguish.” This definition has been adopted, and/or modified, by many state protective service agencies that investigate cases of abuse. An encompassing definition created by a 2002 expert panel convened by the National Academy of Sciences added the concept that elder mistreatment involves a trusting relationship between an older person and another individual in which that trust is violated in some way. The definition that likely best captures current understanding of elder mistreatment is that developed for the 2014 Elder Justice Roadmap, a report prepared by a large, multidisciplinary team of stakeholders inside and outside the US government:
Physical, sexual, or psychological abuse, as well as neglect, abandonment, and financial exploitation of an older person by another person or entity
That occurs in any setting (eg, home, community, or facility)
Either in a relationship where there is an expectation of trust and/or when an older person is targeted based on age or disability
Table 48-1 lists representative examples of types of elder mistreatment.
Whatever definition is employed, a consistent and important feature of elder mistreatment, and of other forms of family violence, is that multiple types of mistreatment, such as physical and verbal abuse, neglect, and financial exploitation, frequently coexist in the same abuser-victim dyad.
TABLE 48-1 ■ REPRESENTATIVE DEFINITIONS OF ELDER MISTREATMENT
Learning Objectives
Elder mistreatment, including physical abuse, sexual abuse, neglect, financial exploitation, psychological abuse, and abandonment, is common and may have serious medical and social consequences.
Elder mistreatment is significantly underrecognized by clinicians and underreported to the authorities.
Key Clinical Point
1. Though researchers have described potential risk factors and are working to identify forensic markers associated with elder mistreatment, a high index of clinical suspicion in all encounters with geriatric patients and routine screening are currently the best tools physicians have to identify this often subtle geriatric syndrome.
Virtually all experts, clinicians, and reasonable laypersons will agree that egregious instances of physical violence such as punching, hitting, slapping, or assaulting an older person with a gun or other weapon are elder abuse. The most contentious definitional (and clinical) area relates to elder neglect, because the term neglect immediately implies that a caregiving obligation—such as providing food, medicines, or care—has not been met. This, in turn, raises difficult questions that must, with clinical judgment and experience, be considered in the context of the older adult’s environment. For example, what are reasonable community standards for the frequency of bathing an assaultive spouse with Alzheimer disease? Does that standard change if the designated caregiver also suffers from chronic diseases that preclude perfect hygiene for their impaired family member? What if this inadequate care enables the “victim” to live at home long after other families would have considered nursing home placement? Who exactly is the responsible caretaker, especially when multiple adult children are available to assume that role, but only one has “stepped up” because of birth order or some other arbitrary circumstance? And is it fair to label that adult child an “elder neglector” when caregiving becomes physically or psychologically impossible?
These difficult questions also highlight the fact that clinicians caring for elder abuse victims often find themselves working closely with alleged perpetrators of abuse and neglect, as these individuals are often the primary caregivers.
Another challenge in conceptualizing and defining elder mistreatment is how to include adults who have been victims of family violence chronically during their adult lives. Do these victims become elder mistreatment sufferers after they reach an arbitrary age cutoff or if they become functionally dependent or suffer cognitive decline?
EPIDEMIOLOGY
However it is defined, elder mistreatment is common. Recent prevalence studies, conducted in different countries, suggest that as many as 10% of community-dwelling older adults suffer from abuse, neglect, or exploitation each year. Multiple smaller studies suggest that nearly 50% of dementia sufferers are victims of mistreatment by caregivers. Psychological/emotional abuse (4.6%–13%), financial mistreatment (3.5%–6.6%), and neglect (5.1%–5.4%) are most commonly reported, with physical mistreatment (0.2%–2.1%) and sexual abuse (0.3%–0.6%) reported less frequently.
Unfortunately, despite its frequency, research suggests that as few as 1 in 24 cases of elder mistreatment is identified by the authorities. Victims may be unable to report abuse due to isolation, severe illness, or dementia, or may be reluctant to report due to fear of reprisal, guilt, desire to protect the abuser, cultural beliefs, or fear of institutionalization. Many older adults who suffer from abuse endure it for years before having it discovered. For others, it may not be until after they have died that their morbidity and early death are considered to be due to abuse. Both of these scenarios lead to delays in identification and intervention.
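The scale of underdetection implied by these figures can be made concrete with simple arithmetic; the 10% annual prevalence and 1-in-24 identification rate come from the text, while the community size and function name are made-up inputs for illustration:

```python
def expected_identified_cases(community_older_adults,
                              annual_prevalence=0.10,
                              identification_rate=1 / 24):
    """Rough arithmetic sketch: of older adults mistreated in a year, only
    about 1 in 24 cases is identified by the authorities (figures from the
    text; prevalence and detection rate vary across studies)."""
    mistreated = community_older_adults * annual_prevalence
    return mistreated * identification_rate

# In a hypothetical community of 24,000 older adults, roughly 2,400 would
# experience mistreatment in a year, with only about 100 cases identified.
```

The point of the calculation is simply that even a modest prevalence combined with a 1-in-24 detection rate leaves the vast majority of cases invisible to the authorities.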
Much research has focused on identifying risk factors for elder mistreatment perpetrators and victims, with inconsistent results (Table 48-2). The most consistent findings relate to the relationship of the abuser to the victim; most studies report that spouses and adult children are the most common perpetrators. Studies also show that when adult children are the abusers, sons and daughters are often equally implicated; at least one study found daughters to be the more common abusers. These findings must be viewed cautiously in that women are far more likely to be the de facto or designated care providers to frail older adults and are, therefore, more "at risk" of being accused of mistreatment should caregiving fall short of any arbitrary standard.
TABLE 48-2 ■ POSSIBLE RISK FACTORS FOR ELDER MISTREATMENT
Studies suggest that women represent two-thirds of elder mistreatment victims. Also, in cultures where women have inferior social status, older women are at high risk of neglect through abandonment when they are widowed, and their property is seized. While current research suggests that older adults with lower socioeconomic status are at greater risk for mistreatment, studies evaluating race and ethnicity have been contradictory. A particularly contentious area has been spawned by the “dependency theory” of mistreatment, which holds that mistreatment occurs when the victim becomes inordinately dependent on the caregiver for a variety of medical and nonmedical needs. Again, studies show an inconsistent relationship between functional disability in an older adult and elder mistreatment. In fact, a more consistent finding in the literature is the converse—the perpetrator is often dependent on the older adult victim for financial support and housing. Characteristically, an adult child unable to achieve independence is reliant on the older person for these needs.
Inconsistencies in findings from risk factor research may be related to the heterogeneous nature of elder mistreatment cases. Older adult protective services workers and clinicians experienced in elder mistreatment know that the term elder mistreatment subsumes many situations—abusive spousal relationships that have “aged,” caregivers to dementia patients who lash out
in frustration, and physically abusive adult children with poorly managed mental health or substance abuse problems are but a few examples.
Epidemiologic studies that attempt to discern risk factors without acknowledging this reality probably are measuring an “average” effect, thus possibly missing important sets of risk factors among subgroups of abused or neglected older populations.
Whatever risk factors are identified in previous and future research should not foster a complacency wherein an absence or paucity of such factors causes the clinician to lower his or her guard. Elder mistreatment crosses all ethnic and socioeconomic boundaries. A high index of clinical suspicion is critical for identification.
PATHOPHYSIOLOGY
Theories of elder mistreatment abound; three deserve detailed discussion here, because they may have clinical relevance with regard to the types of interventions contemplated in confirmed cases of mistreatment. The most commonly cited theory contends that family violence is a learned behavior; abused children grow up to potentially abuse not only their children, but also perhaps spouses and their parents. This is sometimes referred to as the transgenerational violence theory of mistreatment.
The dependency theory of mistreatment holds that abuse is fostered by situations in which victims have a degree of functional and/or cognitive disability that results in activities of daily living impairment and overwhelming care needs. Closely associated with this paradigm is another theory—that of the “stressed caregiver.”
The psychopathology of the abuser theory shifts focus away from the victim and argues that elder mistreatment is firmly rooted in mental health problems of the abuser. Examples include personality disorders, poorly treated or untreated schizophrenia, alcoholism, and other substance abuse problems.
Discerning the underlying causes of elder mistreatment is essential in fashioning an intervention plan (see “Management” later in this chapter).
PRESENTATION
For a variety of reasons, the identification of elder mistreatment is one of the most difficult clinical challenges in geriatric medicine. First, many highly prevalent chronic diseases in older adults may have clinical manifestations
that mimic abuse. If elder abuse is present, the clinician may ascribe those findings to chronic disease rather than family violence. Conversely, the clinician may erroneously attribute findings from another disease to elder mistreatment. Second, the setting in which an elder mistreatment evaluation occurs is often quite challenging. The assessment may be hurried (eg, the emergency department is the often-chaotic environment where acute injuries are evaluated). The presence of the suspected abuser only adds pressure to what is already likely to be a stressful encounter. The perpetrator, and also the victim, may have incentives to actively conceal the mistreatment from the clinician. Lastly, the competent identification and management of mistreatment may create significant additional work and propel the clinician into a world that he or she is likely to be unfamiliar with—a world that includes mandatory reporting statutes, adult protective service workers, and a criminal justice system with a vocabulary that is foreign to many medical professionals. Given these educational, emotional, and systemic obstacles, it is not surprising that elder mistreatment often is missed or unreported in the context of “customary care.” In fact, research suggests that only 1.4% of cases reported to Adult Protective Services (APS) come from physicians, and, in a survey of APS workers, of 17 occupational groups, physicians were among the least helpful in reporting abuse.
Elder abuse forensics is a recent area of intense and growing focus. Of particular interest has been whether there are diagnostic and/or clinical signs and symptoms of abuse presentations, either during life or at autopsy, that are highly specific for elder mistreatment, as have been identified in child abuse (eg, shaken baby syndrome and bucket-handle metaphyseal fracture).
Researchers have begun to search for physical injury patterns associated with abuse. Research has shown that physical abuse and assault-related injuries most commonly occur on the head/face, neck, and upper extremities. One study showed that, in comparison to other older adults, victims of physical abuse have bruises that are more often large (> 5 cm) and more commonly on the face, lateral right arm, or posterior torso. Physical abuse victims are much more likely than older adults presenting to the emergency department after a fall to have injuries to the left cheek/zygoma, neck, or ears. Additionally, physical elder abuse victims are more likely to have maxillofacial/dental/neck injuries combined with an absence of upper or lower extremity injuries. Medical and laboratory markers potentially suggestive of elder mistreatment have been proposed by experts, including malnutrition, dehydration, alterations in the status of chronic illness, hypothermia/hyperthermia, rhabdomyolysis, and toxicologic findings, but these have not yet been systematically evaluated.
Until rigorous research identifies reliable forensic markers, clinicians need to consider elder mistreatment in the differential diagnosis of many or most of the clinical presentations they encounter. Fractures may result from osteoporosis or force or both. Depression may be related to neurotransmitter imbalances or a hopeless abusive environment. Malnutrition may be the result of any number of chronic illnesses inexorably worsening, or from the withholding of sustenance.
Dramatic injuries or neglect pose no particular diagnostic challenge. Fractures, burns, contusions, and lacerations, in concert with a credible history, immediately lead to the diagnosis. At the other extreme, subtle presentations that mimic chronic disease are highly challenging. Examples include chronic diseases that frequently decompensate despite a care plan and adequate resources (eg, repeated emergency department visits for congestive heart failure or chronic obstructive pulmonary disease exacerbation). Indeed, because elder mistreatment can be defined so broadly, there are very few presenting signs or symptoms in the geriatric patient for which elder mistreatment is not in the differential diagnosis.
Many instruments have been devised for the screening or evaluation of elder mistreatment, but they are not applicable to all settings. The Elder Abuse Suspicion Index (EASI) is a short screening instrument that has been validated for cognitively intact patients in a primary care setting, with a sensitivity of 0.47 and a specificity of 0.75. The EASI (Figure 48-1) is a tool to identify older adults at risk of mistreatment and includes five questions for the patient and one for the physician, with one or more "yes" responses suggesting that further assessment is needed.
FIGURE 48-1. Elder Abuse Suspicion Index (EASI). (Reproduced with permission from Yaffe MJ, Wolfson C, Lithwick M, et al. Development and validation of a tool to improve physician identification of elder abuse: the Elder Abuse Suspicion Index (EASI). J Elder Abuse Negl. 2008;20[3]:276–300.)
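As a purely illustrative sketch (not a clinical implementation; the function name and input encoding are invented for this example), the EASI decision rule described above — six yes/no items, with one or more "yes" responses prompting further assessment — amounts to:

```python
def easi_flags_follow_up(responses):
    """Apply the EASI decision rule as described in the text: given six
    yes/no items (five patient questions plus one physician
    observation), one or more 'yes' answers suggest that further
    assessment for elder mistreatment is needed."""
    if len(responses) != 6:
        raise ValueError("The EASI has exactly six yes/no items")
    return any(responses)  # True -> further assessment suggested
```

Note that a single positive item triggers follow-up; the instrument prompts further assessment rather than making a diagnosis.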
All screening instruments may assist the clinician by serving as "checklists" to ensure a thorough evaluation. Table 48-3 suggests a system-by-system approach. The importance of heightened awareness cannot be overemphasized in considering the diagnosis. Frequently, clues about potential mistreatment come from ancillary staff members (eg, office reception staff) or home care nurses who observe the abuser-victim dyad away from the health care provider. A general sense that something is amiss in the patient's environment, such as caustic interactions between parties, poor hygiene or dress, frequently missed medical appointments, or failure to adhere to a clearly designated treatment strategy, can all be important clues.
TABLE 48-3 ■ CLINICAL MANIFESTATIONS OF POTENTIAL MISTREATMENT WITH RECOMMENDED ASSESSMENT
The patient and the alleged perpetrator should be interviewed separately and alone. Although there is an emerging consensus that patients of all ages should be routinely screened for family violence, an optimal strategy or instrument has yet to be identified. Patients should be asked candidly and calmly about the etiology of any unexplained injuries or other findings. Often patients are at first unwilling to speak candidly about being an elder abuse victim for reasons of embarrassment, shame, or fear of retribution from the perpetrator, who is frequently a caregiver.
Interviewing the suspected abuser is a delicate and potentially dangerous undertaking. At one extreme, elder abusers who are presented with an empathetic, nonjudgmental ear to describe their stresses and actions will sometimes describe their situations at great length and in great detail. At the other, all forms of domestic abuse share a pattern wherein abusers gain and control access to their victims. An elder abuser graphically confronted with allegations of mistreatment may move to sequester the victim in such a way that a frail, isolated older adult loses access to critically needed medical and social services. Whenever possible, assistance from providers skilled in elder abuse evaluation and management should be enlisted for such undertakings.
MANAGEMENT
Elder abuse and neglect carry substantial morbidity and mortality. Mistreatment is associated with adverse health outcomes for victims, including significant increases in emergency department use, hospitalization, dementia, depression, and nursing home placement. Also, research has shown that mistreatment victims have a threefold risk of death compared to nonabused controls. Thus, intervention is critical.
Unfortunately, there are no randomized trials of reasonable quality addressing interventions for elder mistreatment. The clinician confronted with a confirmed case of elder abuse is best served by a resourceful approach that combines experience, clinical judgment, and local resources. One developing trend is large interdisciplinary groups who convene regularly to discuss cases of mistreatment, not only for the purpose of planning intervention, but also to consider case-by-case forensics, cross train disciplines, and provide general support to one another in this difficult field. However, there is no evidence-based evaluation of this strategy.
Whatever the approach to intervention, a dogmatic or algorithmic strategy to address all elder mistreatment cases is likely misguided. A rigid, inflexible approach ignores the enormous heterogeneity of the entity, including the type(s) of mistreatment being concurrently perpetrated, the underlying mechanisms, patient comorbidities, caregiver burden issues, and the available resources (both familial and community) that can be brought to bear on the issue. A more sensible approach may be the multipronged strategy increasingly used to treat other geriatric syndromes that have multifactorial etiologies. The paradigm may be a useful one in that elder
mistreatment can be likened to geriatric syndromes. That is, there may be multiple “host” and environmental contributors; decompensation may be accelerated by other medical and social problems; and some of the contributors may be more remediable than others. The elder physical abuse victim with severe chronic obstructive pulmonary disease and an abusive schizophrenic child-caregiver will need an entirely different series of interventions than the spouse with progressive dementia who has suffered lifelong domestic violence that is now worsening.
The first step in confronting any confirmed case of family violence is ensuring the safety of the victim. First, the immediate threat of danger to the victim should be ascertained. Even if there is no immediate threat, a safety plan is critical in the management of all forms of family violence (Table 48-4). What are the specific steps the victim should take if the perpetrator of mistreatment becomes acutely violent? Options include calling the local police department, accessing shelters, emergency department use/hospital admission, or respite care in some evolved systems of long-term care.
Additionally, in most states, cases of elder abuse must be reported to adult protective services agencies. This typically results in a home visit to adjudicate the veracity of such a report. State protective service agencies vary widely with respect to their caseloads and available resources; ideally a coordinated approach that brings to bear their expertise and resources in collaboration with the physician and multidisciplinary team produces the best response.
TABLE 48-4 ■ SAFETY PLAN FOR VICTIMS OF ELDER MISTREATMENT OR OTHER VICTIMS OF FAMILY VIOLENCE WITH CAPACITY WHO INSIST ON REMAINING IN AN ABUSIVE ENVIRONMENT
The safety plan paradigm will have limited utility in many cases of elder mistreatment, however, because of victim frailty and/or cognitive impairment that limits the use of self-protective behaviors. Frequently, clinicians find themselves in the predicament of caring for an elder mistreatment victim who lacks capacity. In these cases, the likely intervention will involve the appointment of a guardian in collaboration with adult protective service agencies or other elder social service programs in the community that serve such functions. In such a proceeding, the clinician’s role is to provide objective evidence that documents the lack of decision-making capacity. The clinician may also have a role in ensuring the alleged perpetrator of mistreatment does not become the guardian.
One of the most frustrating situations for professionals working with victims of family violence is the individual who retains decision-making capacity and insists on remaining in an abusive environment. Here the clinician’s role is to educate the patient about the tendency of family violence to escalate and to review the safety plan created. The clinician should also explain to the patient that even if services are refused, the physician remains an important and available resource, should the situation change.
In general, the physician who suspects elder abuse would do well to employ the same creative strategies he or she uses to manage a variety of clinical problems in older adults. There may be local social services agencies in the community that provide an array of services such as meals on wheels or friendly visits. These services not only represent a new resource for the patient but also provide additional "eyes and ears" to ascertain what the home
situation is like. A local adult day care referral might also enable a more detailed ongoing evaluation of a client while decompressing a stressful caregiving situation. A financial management program for the patient with cognitive impairment can shed light on the possibility of financial exploitation. The physician need not diagnose elder abuse while these useful services are being proffered.
Physicians should strive to provide trauma-informed care to potential elder abuse and neglect victims. This involves recognizing and being sensitive to the deep impact of stressful and traumatic experiences on a patient’s physical and mental health. Elder abuse or neglect, which may occur every day for years, may cause depression, anxiety, or post-traumatic stress disorder, as may other life experiences. Practicing trauma-informed care involves maximizing a victim’s choice and control and trying to minimize re-traumatization through treatment. Delivering trauma-informed care includes specific strategies: emphasizing the intention to maintain a patient’s privacy and confidentiality, asking permission before touching a potential victim, limiting how much a victim has to talk about the mistreatment, and avoiding words such as violence/abuse/neglect/mistreatment/criminal behavior if the victim does not initially conceive of what has occurred in this way. Physicians should also use a trauma-informed approach when treating cognitively impaired patients, as they may also be profoundly impacted by current or previous traumatic exposures.
SPECIAL SITUATIONS
Elder mistreatment may also occur in institutional settings. The physician and nurse have roles in detecting these cases as well. Substantial regulatory safeguards have been progressively enacted since the 1970s to protect residents of long-term care facilities. These safeguards include mandatory criminal background checks of all employees, ombudsman programs to adjudicate complaints of mistreatment, and components of the Omnibus Budget Reconciliation Act of 1987, which includes residents’ rights provisions (eg, minimization of restraints). In some contexts, the failure to create or follow a reasonable plan of care for the long-term care residents may be viewed as abusive or neglectful.
While the focus of elder abuse in long-term care has been on staff abuse of residents, this is probably far less common than in the past, when regulatory scrutiny was lacking. More recently, resident-on-resident abuse has been identified as a far more common and pervasive problem among nursing home residents. This includes verbal, physical, and sexual mistreatment.
Although there are no prevalence data on the phenomenon, preliminary and indirect evidence suggests that it is highly prevalent. For example, more than 50% of nursing aides in long-term care report the personal experience of being physically hit by a resident in the previous year, typically in the course of providing direct care. Given that the prevalence of dementia and associated behavioral disturbance in long-term care facilities is high, it stands to reason that behavior of this type occurs frequently between residents.
Another recent area of interest has been abuse and neglect that occurs in long-term care environments other than nursing homes (eg, assisted living and board and care environments), because these facilities generally are under considerably less regulation. Interest in mistreatment in assisted-living facilities has also grown in recent years as much sicker patients begin to inhabit these institutions; many believe the higher acuity and generally lower levels of staff and supervision are a dangerous admixture in which abuse and neglect are more likely to occur. Data are lacking on the prevalence of abuse, or on the type of abusers, in these settings.
Physicians and other care providers have an important role in the detection of these institutional cases, because they may see potential manifestations of nursing home elder mistreatment in facilities or emergency departments as part of providing customary care. Physicians who suspect institutional abuse have an obligation to immediately report their suspicions to the long-term care ombudsman in their state.
SUMMARY
Elder mistreatment is a prevalent problem with many potential manifestations. The epidemiology of injuries and other clinical findings is not completely understood, but this does not preclude the clinician from taking an active role in its detection and management. Studies show elder mistreatment victims to be at substantial independent risk of death and quality-of-life decline. The syndrome should be afforded the same vigilance that health care providers devote to other “traditional” medical problems in geriatrics.
FURTHER READING
Acierno R, Hernandez MA, Amstadter AB, et al. Prevalence and correlates of emotional, physical, sexual, and financial abuse and potential neglect in the United States: the National Elder Mistreatment Study. Am J Public Health. 2010;100:292–297.
Bonnie J, Wallace RB, eds. Elder Mistreatment: Abuse, Neglect, and Exploitation in an Aging America. Washington, DC: National Academy of Sciences Press; 2003.
Dong XQ. Elder abuse: systematic review and implications for practice. J Am Geriatr Soc. 2015;63(6):1214–1238.
Lachs MS, Pillemer KA. Elder abuse. Lancet. 2004;364:1263–1272.
Lachs MS, Pillemer KA. Elder abuse. N Engl J Med. 2015;373(20):1947–1956.
Lachs MS, Teresi JA, Ramirez M, et al. The prevalence of resident-to-resident elder mistreatment in nursing homes. Ann Intern Med. 2016;165(4):229–236.
Lachs MS, Williams C, O’Brien S, et al. Risk factors for reported elder abuse and neglect: a nine-year observational study. Gerontologist. 1997;37:469–474.
Lachs MS, Williams CS, O’Brien S, et al. The mortality of elder mistreatment. JAMA. 1998;280:428–432.
Laumann EO, Leitsch SA, Waite LJ. Elder mistreatment in the United States: prevalence estimates from a nationally representative study. J Gerontol B Psychol Sci Soc Sci. 2008;63(4):S248–S250.
Mosqueda L, Burnight K, Gironda MW, Moore AA, Robinson J, Olsen B. The abuse intervention model: a pragmatic approach to intervention for elder mistreatment. J Am Geriatr Soc. 2016;64(9):1879–1883.
National Center for Elder Abuse. The Elder Justice Roadmap: A Stakeholder Initiative to Respond to an Emerging Health, Justice, Financial, and Social Crisis. https://www.justice.gov/file/852856/download. Accessed June 3, 2021.
Pillemer K, Burnes D, Riffin C, Lachs MS. Elder abuse: global situation, risk factors, and prevention strategies. Gerontologist. 2016;56(Suppl 2):S194–S205.
Pillemer K, Finkelhor D. The prevalence of elder abuse: a random sample survey. Gerontologist. 1988;28:51–57.
Rosen T, LoFaso VM, Bloemen EM, et al. Identifying injury patterns associated with physical elder abuse: analysis of legally adjudicated cases. Ann Emerg Med. 2020;76(3):266–276.
Rosen T, Pillemer K, Lachs M. Resident-to-resident aggression in long-term care facilities: an understudied problem. Aggress Violent Behav. 2008;13(2):77–87.
Under the Radar: New York State Elder Abuse Prevalence Study: Self-Reported Prevalence and Documented Case Surveys 2012. https://ocfs.ny.gov/main/reports/Under%20the%20Radar%2005%2012%2011%20final%20report.pdf. Accessed June 3, 2021.
Wiglesworth A, Austin R, Corona M, et al. Bruising as a marker of physical elder abuse. J Am Geriatr Soc. 2009; 57(7):1191–1196.
Wiglesworth A, Mosqueda L, Burnight K, et al. Findings from an elder abuse forensic center. Gerontologist. 2006; 46:277–283.
Yaffe MJ, Wolfson C, Lithwick M, Weiss D. Development and validation of a tool to improve physician identification of elder abuse: the Elder Abuse Suspicion Index (EASI). J Elder Abuse Negl. 2008;20:276–300.
Muscle Aging and Sarcopenia
Alfonso J. Cruz-Jentoft
INTRODUCTION
Age-related losses of muscle mass and strength are common and can lead to sarcopenia, a condition typically consisting of a combination of loss of strength, physical function, and muscle mass. This chapter will cover concepts related to the process of muscle aging as well as the current status of sarcopenia detection, evaluation, and management.
The human body is made up of more than 600 skeletal muscles, accounting for around 40% of the total body mass. Excluding water, muscle is composed of about 80% protein, representing about 50% of total body protein. The main functions of skeletal muscle are mobility and metabolic regulation, including serving as the body's principal reservoir of protein.
Adults tend to lose muscle mass at a rate of about 8% per decade after age
40. At age 70, an adult will have lost a mean of 24% of the muscle mass present at age 30. The rate of muscle mass loss accelerates and almost doubles after age 70. It is essential to note that adults lose muscle strength much faster; about 3% to 4% per year after age 50. Strength loss renders older persons vulnerable to physical disability.
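The figures above follow simple arithmetic: losing about 8% of the age-30 mass per decade between ages 40 and 70 gives 3 × 8% = 24% by age 70. A toy calculation can make the trajectory explicit (the post-70 rate of 15% per decade is an assumed stand-in for the statement that loss "almost doubles"):

```python
def muscle_mass_lost_pct(age, rate_40_70=8.0, rate_after_70=15.0):
    """Estimated percentage of age-30 muscle mass lost by a given age.
    Uses the chapter's ~8%/decade figure between ages 40 and 70; the
    post-70 rate (15%/decade here) is an assumption standing in for
    'almost doubles' after age 70."""
    if age <= 40:
        return 0.0  # loss treated as negligible before age 40
    lost = min(age - 40, 30) / 10 * rate_40_70
    if age > 70:
        lost += (age - 70) / 10 * rate_after_70
    return lost
```

With these assumptions, `muscle_mass_lost_pct(70)` reproduces the 24% figure quoted in the text.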
MOTOR UNITS AND THEIR REGULATION
The basic functional unit of skeletal muscles is the motor unit. Each motor unit consists of a neuron, its axon, and the muscle fibers innervated by that neuron. The neuron terminal is connected to the muscle fiber through the
neuromuscular junction, where neurons release neurotransmitters that bind to receptors on the muscle cell. A single motor neuron may innervate from a few to thousands of muscle fibers, depending on the muscle, with neurons responsible for higher force production innervating larger numbers of fibers. Human muscles harbor three types of muscle fibers: type I, type IIa, and type IIx (formerly named IIb). Type I fibers (slow-twitch) are well adapted to aerobic exercise and are highly resistant to fatigue, having high oxidative capacity and a slow rate of adenosine triphosphate (ATP) turnover. Type IIx fibers (fast-twitch) contract several times faster but fatigue rapidly; they are adapted to brief, intense contractions (as in weight lifting) and have high glycolytic ATP-generating capacity but low oxidative capacity. Type IIa fibers (also fast-twitch) are fast yet relatively fatigue-resistant (Table 49-1). Muscle myofibrils are built from two main contractile proteins, actin and myosin; the double-helical actin filament is regulated by troponin and tropomyosin.
TABLE 49-1 ■ CHARACTERISTICS OF MUSCLE FIBERS
Learning Objectives
Summarize age-related changes in the skeletal muscle and motor units.
Appraise the evolving concept of sarcopenia and the characteristics of the different definitions proposed in the past two decades.
Use muscle mass and function to diagnose sarcopenia in clinical practice.
Develop a treatment plan for sarcopenic patients.
Key Clinical Points
Muscle aging involves anatomical and physiological changes in motor units and their regulation.
Sarcopenia is a progressive and generalized disease involving the accelerated loss of muscle mass and function. The concept of sarcopenia has evolved from low muscle mass alone to include muscle function.
Around 10% of older persons living in the community suffer from sarcopenia, with a higher prevalence in other clinical settings.
Measures of muscle mass, muscle strength, and physical performance are used to diagnose sarcopenia in clinical practice.
Treatment of sarcopenia requires resistance exercise. Nutrition may have a role. No drugs are yet available for this condition.
The nervous system regulates recruitment of motor units to carry out movements by generating action potentials in the motor cortex of the brain that propagate down to the neuromuscular junction, triggering contraction of muscle fibers in a highly energy-dependent process. The motor cortex is in turn regulated by many other brain regions.
MUSCLE AGING
Age-related changes have been described at all levels of the motor unit. With increasing age the number of muscle fibers is reduced, reflected in a loss of almost one-third of muscle mass in late life. This loss seems to be faster in type II than in type I muscle fibers. Compared to younger adults, more than one-third of motor neurons are typically lost in older adults. By age 70, the number of motor units is reduced about 40% compared with the number found in 25-year-olds. Continued loss with age after age 70 can lead to nonagenarians having only one-third of the number of motor units of young persons, even in apparently healthy aging. Apoptosis of motor neurons with
denervation of the innervated muscle fibers leads to muscle weakness. This process has been associated with sarcopenia. An adaptive response to motor neuron loss is motor unit remodeling in a denervation-reinnervation cycle, in which a surviving nerve fiber reinnervates larger bundles of muscle fibers to counteract neuron loss (Figure 49-1). This process preferentially involves fast fibers, which may gradually become slower and thus reduce muscle force and muscle power (the ability to exert force rapidly).
FIGURE 49-1. Motor unit remodeling in a denervation-reinnervation cycle.
The firing capacity of motor units is also reduced with age during submaximal and maximal intensity contractions, a phenomenon that has been linked to motor unit remodeling, by shifting the fastest firing fibers to a slower phenotype. The consequence is that older persons need to increase
the number of activated fibers and come closer to maximal effort to perform usual tasks such as climbing stairs or rising from a chair. The control of muscle contraction and movement also depends on other integrative neural mechanisms that show age-related changes, including neural excitability and brain connectivity.
The magnitude of change in muscle size and function is strongly associated with long-term lifestyle, especially the degree of physical activity and the amount of endurance exercise. This was first demonstrated in animal models, where long-term exercise is associated with reduced loss of motor neurons and muscle fibers compared with sedentary animals. Studies in runners have shown that at least part of the age-related motor unit loss may be prevented or retarded by regular physical exercise, perhaps via an increase in the reinnervation capacity of denervated units. An adequate protein intake over long periods of time is also associated with reduced muscle mass loss and may help maintain a favorable balance between muscle protein anabolism and catabolism. Other genetic and molecular changes involved in age-related changes of the skeletal muscle and the motor unit remain largely the focus of research.
SARCOPENIA: CHANGING DEFINITIONS OVER TIME
Sarcopenia is the most frequent age-associated disease of the skeletal muscle. It is defined as a progressive and generalized disease involving the accelerated loss of muscle mass and function that is associated with increased likelihood of adverse outcomes including falls, frailty, physical disability, and mortality. It may be considered “skeletal muscle insufficiency,” similar to cardiac or renal insufficiency as a useful clinical construct.
The term “sarcopenia” is derived from the Greek words for “poverty of flesh.” It was first described in the 1980s as an age-related decline in lean body mass affecting mobility, nutritional status, and independence. Initial studies of sarcopenia focused mostly on body composition (for some, sarcopenia is still a synonym of low skeletal muscle mass). However, over the years it has become clear that muscle mass measures or estimations with any available method are poor predictors of clinical outcomes, including physical disability. On the other hand, functional measures (muscle strength
and some measures of physical performance, especially gait speed) were confirmed to be good predictors of disability and other relevant outcomes. Such evidence radically changed the concept of sarcopenia in recent years. This is not surprising; cardiac muscle function is also more relevant than mass, except for diseases linked to reduced distensibility, a property that is not relevant for most skeletal muscles.
Building on this evidence, six expert consensus definitions were published from 2010 to 2014 by different groups and organizations: working groups of the European Society for Clinical Nutrition and Metabolism (ESPEN), the European Working Group on Sarcopenia in Older People (EWGSOP, nurtured by the European Geriatric Medicine Society and ESPEN), the International Working Group on Sarcopenia (IWGS), a group around the Society on Sarcopenia, Cachexia and Wasting Disorders (SSCWD), the Asian Working Group for Sarcopenia (AWGS), and the US Foundation for the National Institutes of Health (FNIH) (Table 49-2). Although not equivalent, all definitions added functional measures to muscle mass estimates, were widely accepted by the scientific community, and triggered an increase in research that confirmed that defining sarcopenia as low muscle mass plus low muscle function was a relevant clinical construct that predicted outcomes and was potentially amenable to improvement with interventions. These advances led to the recognition of sarcopenia as a disease with an ICD-10-CM code in 2016.
TABLE 49-2 ■ PARAMETERS INCLUDED IN DEFINITIONS OF SARCOPENIA AROUND 2010
In recent years, measures and estimations of muscle mass have gradually lost credibility. Methods are unreliable, clear cutoff points could never be agreed upon, and the lack of correlation of muscle mass measures with relevant outcomes was confirmed. This reduced credibility, among other
factors, led to a low uptake of the diagnosis of sarcopenia in clinical practice, missing the opportunity to prevent or reverse some outcomes. Therefore, some working groups decided to update their definitions, highlighting the importance of low muscle function and reducing the role of muscle mass in the diagnosis of sarcopenia, with the declared aim to make the diagnosis more friendly to practitioners.
The first update came from the EWGSOP (EWGSOP2) in 2019. This revised definition moved muscle strength to the first parameter to be measured when sarcopenia is suspected, with muscle mass used to confirm the involvement of muscle in strength loss. In fact, the EWGSOP2 proposes the term "suspected sarcopenia" for patients with low muscle strength when muscle mass has not been measured. It also suggests the alternative use of measures of muscle quality (both in terms of muscle composition and of the ability of a given muscle volume to deliver strength) to replace muscle mass. For the EWGSOP2, low physical performance (defined as the whole-body ability to move or endure, as measured with walking or chair-stand tests) is a marker of severity of sarcopenia that increases the risk of adverse outcomes. There is some evidence that mild or moderate sarcopenia may respond to some treatments that could be insufficient to tackle severe sarcopenia. This group also includes the concept of case finding (already suggested by the SSCWD group) by clinical symptoms or questionnaires, with the aim of improving the detection of sarcopenia in clinical settings.
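The EWGSOP2 sequence just described — strength first, mass to confirm, performance to grade severity — can be summarized in a short schematic sketch. The labels follow the text; the booleans are assumed to come from upstream measurements compared against population-specific cutoffs, which are deliberately not encoded here:

```python
def ewgsop2_classify(low_strength, low_mass=None, low_performance=False):
    """Schematic of the EWGSOP2 logic described in the text.
    low_mass is None when muscle mass (or quality) has not been
    measured; in that case low strength alone yields only
    'suspected sarcopenia'."""
    if not low_strength:
        return "sarcopenia not suspected"
    if low_mass is None:
        return "suspected sarcopenia"  # strength low, mass unmeasured
    if not low_mass:
        return "low strength without confirmed sarcopenia"
    return "severe sarcopenia" if low_performance else "sarcopenia"
```

The design mirrors the clinical sequence: each step short-circuits, so no parameter is consulted before the earlier one has been found abnormal.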
The AWGS (whose first definition was identical to the EWGSOP with distinct cutoff points for Asian populations) also published an updated version in 2020, where case finding is also the first diagnostic step, although the algorithm, instruments, and threshold for suspicion depend on the clinical setting (community vs other care settings). In specialized settings, muscle strength would be the first measure, followed by physical performance and muscle mass. However, this definition still requires impairment in either muscle mass or function to define sarcopenia, and in both to consider it severe.
Finally, the FNIH group, now named Sarcopenia Definition and Outcomes Consortium (SDOC), recently published an international expert panel position on 13 statements on the putative components of a sarcopenia definition. They confirmed by new analyses of available epidemiologic databases that low muscle strength is associated with physical performance (slowness) and that both low grip strength and low gait speed independently
predict falls, self-reported mobility limitation, hip fractures, and mortality in community-dwelling older adults. On the other hand, they found again that lean mass measured by DXA was not associated with incident adverse health-related outcomes in community-dwelling older adults with or without adjustment for body size. This group recommends that low grip strength and usual gait speed should be included in the definition of sarcopenia, while low muscle mass (measured by DXA) should not be included. The evolution of definitions of sarcopenia is depicted in Figure 49-2.
FIGURE 49-2. Guide to the evolution of the definition of sarcopenia.
EPIDEMIOLOGY
Sarcopenia is a relatively common condition in old age with adverse effects on current as well as future health. Estimates of disease prevalence and incidence are changing with the availability of more accurate definitions. The prevalence of sarcopenia using definitions that only consider low muscle mass can be as high as 40% in community-dwelling older persons. However, when muscle function is also included, estimates of the prevalence of sarcopenia in the community are around 10% to 15%. The incidence of new cases of sarcopenia in older adults may be higher than 3% per year, although this has been less studied. There are no clear differences in the prevalence of sarcopenia in males and females. However, the prevalence may be higher in acutely ill hospitalized patients (especially those with severe chronic diseases, present in > 20% of hospitalized older patients), in post-acute and rehabilitation settings (where prevalence may be > 50%), and in nursing homes (> 40% of residents will be sarcopenic).
Many risk factors for sarcopenia have been described. The association with age is probably related to long-term lifestyle. Protein intake has been shown to be related to muscle mass and function in epidemiological studies.
People in the lowest quartile of protein intake show a higher age-related loss of muscle mass than those in the upper quartile. A slight deficit in protein intake, when sustained for many years, may have a strong impact on muscle mass and function in old age. Low levels of vitamin D, omega-3 fatty acids, and some micronutrients (magnesium, selenium) may also be associated with impaired muscle function, although evidence is still weak. Dietary patterns are also important to muscle health. Good adherence to a Mediterranean diet, or having high scores in other measures of healthy eating (usually involving higher intake of fruits, vegetables, fatty fish, or whole grains), is also related to better muscle function and reduced disability compared with less healthy diets.
Low physical activity and sedentariness are usually considered risk factors for sarcopenia, but evidence is weak, as no robust long-term studies have shown an association between lifetime physical activity and exercise and muscle health. Chronic diseases may also be a risk factor for sarcopenia, especially when they limit mobility or appropriate nutrition in old age. Low-grade inflammation has been shown to be associated with sarcopenia. When inflammation is severe, sarcopenia blends into cachexia. Diabetes mellitus is also an established risk factor for sarcopenia; sarcopenia is more common in persons with diabetes compared to those without. On the other hand, sarcopenia may contribute to the development and progression of diabetes through altered glucose disposal due to low muscle mass and intramuscular adipose tissue accumulation.
Obesity and sarcopenia are also related, in a construct named sarcopenic obesity. Patients with sarcopenic obesity have high adiposity and low lean mass. However, this condition is still ill defined, as some patients may have adequate muscle mass (compared with standard population measures) but have muscle strength that is unable to move an increased total body mass.
There is an initiative under the auspices of ESPEN to propose a definition of sarcopenic obesity that may advance the field, expected to be released in 2022.
Sarcopenia is associated with many adverse health consequences. Many studies have confirmed that mortality is 3 to 4 times higher in sarcopenic compared with non-sarcopenic persons with otherwise similar health conditions, and the impact of sarcopenia on mortality may be even higher in persons older than 80 years. Sarcopenia is also associated with a much higher risk of physical and cognitive frailty, functional decline, and disability. The risk of falls and fractures is also slightly increased in
sarcopenia, although evidence about this association is not yet unequivocal. Sarcopenia also seems to increase the risk of hospitalization, readmission, and impaired health-related quality of life, whether assessed by general instruments or disease-specific instruments. Whether sarcopenia increases the risk of nursing home admission or global health care costs remains to be elucidated.
PATHOPHYSIOLOGY
Processes leading to sarcopenia are complex and interacting. They include anatomical and functional changes imposed on an aging skeletal muscle and nervous system, and also changes in hormonal regulatory mechanisms.
Research is continuing, but it may well be that the phenotype of sarcopenia is produced through different mechanisms in different individuals, which could be relevant when tailoring interventions to individual patients.
Aging disturbs skeletal muscle homeostasis, which requires balance between hypertrophy and regeneration through complex and not yet fully understood mechanisms and pathways. Aging modifies the balance between muscle protein anabolic and catabolic pathways, with reduced protein uptake and probably increased catabolism, leading to an overall loss of skeletal muscle mass. As mentioned above, cellular changes include a reduction in the size and number of myofibers, particularly affecting type II fibers, with a partial transition of muscle fibers from type II to type I. Satellite cells (muscle progenitor cells) decrease, especially those associated with type II fibers. Extracellular fat increases with aging, as do intra- and intermuscular fat, infiltrating muscles in a process named myosteatosis. The metabolic and hormonal interrelations between adipose and muscle tissue seem to be important in the pathogenesis of sarcopenia. This may help to explain the still confusing relation between sarcopenia and obesity. Mitochondrial integrity in myocytes is altered, with an impairment in intracellular energy processes.
Other intracellular signaling pathways, including insulin-like growth factor 1 (IGF-1), mTOR, and FoxO transcription factors, have also been shown to be altered in sarcopenic muscles. Changes in muscle gene expression, probably mediated through epigenetic changes, have also been described.
Age-related changes in neurologic signaling and central nervous system control mechanisms, as described above, also have a role in the pathogenesis of sarcopenia. However, it is unknown to what degree, as research in this area is still limited. Untangling the role of muscle and nervous changes is
urgent, as it has a direct impact on therapeutic approaches. For example, the mixed results of nutritional interventions to promote muscle growth may be due to the fact that such interventions do not improve neural mechanisms. Also, innovative treatments aiming to improve cortical and associative brain areas may well be useful, at least in some instances.
Physical exercise, the best-established treatment for sarcopenia, has been shown to act through both muscular and neural pathways.
Finally, the relation between bones and muscles is currently being explored. Both are key to mobility and they are anatomically and physically related. Osteoporosis and sarcopenia frequently coexist and, when present together, exponentially increase the risk of some adverse outcomes, so the term “osteosarcopenia” is gaining momentum. Research has found evidence of cross-talk between muscle and bone through endocrine factors such as myostatin, irisin, osteocalcin, and many other substances, although the relevance of this communication in the pathogenesis of sarcopenia has not been established.
DIAGNOSIS OF SARCOPENIA IN CLINICAL PRACTICE
Awaiting a global consensus on the definition of sarcopenia, clinicians will probably opt to choose one of the most recent definitions and use it in their practice. The diagnosis of sarcopenia using any modern definition is relatively straightforward and requires measurement of a combination of muscle mass, muscle strength, and physical performance. All definitions use at least two of these three, suggesting different cutoff points depending on the measure used and the population.
Screening
At this time, screening for sarcopenia has not been proved to be cost-effective, either universally or in defined care settings or populations. A screening instrument, the SARC-F questionnaire, which explores five items and can be self-administered, has the strongest evidence, with high sensitivity at low cutoffs (> 1 point) and good specificity at high cutoffs (> 4) (Table 49-3). This instrument can be used in clinical scenarios with a high expected prevalence of sarcopenia, including geriatric outpatient clinics, hospitalized older patients, rehabilitation settings, or nursing homes. Some modified versions have been proposed, including other elements such as calf circumference.
TABLE 49-3 ■ COMPONENTS OF THE SARC-F SCREENING TEST
Also, the diagnosis of sarcopenia may be triggered by the presence of signs and symptoms associated with this condition, including complaints of weakness, slowness, muscle wasting, falls, difficulties carrying out activities of daily living, or problems in rising from a chair or bed.
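The screening decision described above can be sketched as a small function. Only the two cutoffs quoted in the text are used (score > 1 as the sensitive threshold, > 4 as the specific one); the function name and the wording of the returned messages are of course illustrative, and the item-level scoring belongs to Table 49-3.

```python
def interpret_sarc_f(score: int) -> str:
    """Interpret a SARC-F total score with the two cutoffs quoted in
    the text: > 1 point (high sensitivity) and > 4 points (good
    specificity). The questionnaire sums five self-reported items;
    item wording and scoring are given in Table 49-3."""
    if score > 4:
        return "positive (specific cutoff): proceed to muscle strength testing"
    if score > 1:
        return "positive (sensitive cutoff): consider muscle strength testing"
    return "negative: sarcopenia unlikely on screening"

# Example: a patient scoring 5 exceeds the specific cutoff.
print(interpret_sarc_f(5))
```

A low cutoff trades specificity for sensitivity, so in high-prevalence settings (geriatric clinics, nursing homes) the lower threshold may be preferred to avoid missing cases.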
Muscle Strength
Because it is the most widely used definition worldwide, here we follow the suggestions of the updated EWGSOP2, which proposes a stepwise approach to the diagnosis of sarcopenia. The AWGS suggests a similar, but not identical, approach that differs between primary care and specialized settings. The SDOC has not provided recommendations for clinical practice.
The recommended first parameter to be measured when sarcopenia is suspected clinically or by a positive SARC-F is muscle strength, usually through handgrip strength. Although measuring leg strength would seem to be more intuitive, it requires expensive equipment, cutoffs have not been defined for populations, and the link between leg strength and grip strength is consistent enough to use the latter as a proxy. Most evidence on grip strength comes from studies that use a hydraulic (Jamar) dynamometer, but other models are being progressively incorporated into clinical practice.
Grip strength measures need to be standardized by using a validated protocol (as is true for blood pressure and other physical measures). If grip strength is below the reference values for gender and population (Table 49-4), sarcopenia should be suspected, as it is the most frequent cause of muscle weakness. However, the differential diagnosis is wide and other potential causes of low muscle strength should be considered—for example, hand osteoarthritis, low motivation, or neurological disorders. Low grip strength is highly predictive of a range of adverse outcomes, regardless of its cause, and merits therapeutic intervention even when no further diagnostic tests are available.
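The stepwise EWGSOP2 logic (strength first, muscle mass as confirmation, physical performance as a marker of severity) can be sketched as follows. The boolean inputs stand for comparisons against the gender-, population-, and method-specific cutoffs of Table 49-4, which are not reproduced here; the category labels follow the terminology used in the text.

```python
from typing import Optional

def ewgsop2_category(low_strength: bool,
                     low_mass: Optional[bool],
                     low_performance: bool) -> str:
    """Stepwise EWGSOP2 classification as described in the text:
    low strength alone -> 'suspected sarcopenia' when mass has not
    been measured; low strength plus low mass -> confirmed; adding
    low physical performance -> severe."""
    if not low_strength:
        return "no sarcopenia (by this pathway)"
    if low_mass is None:
        return "suspected sarcopenia (muscle mass not measured)"
    if not low_mass:
        return "low strength, consider nonmuscular causes (eg, osteoarthritis, neurological disease)"
    if low_performance:
        return "severe sarcopenia"
    return "confirmed sarcopenia"
```

For example, `ewgsop2_category(True, None, False)` returns the “suspected sarcopenia” label, matching the EWGSOP2 advice that low grip strength alone already merits intervention.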
TABLE 49-4 ■ CUTOFF POINTS TO DEFINE LOW GRIP STRENGTH AND REDUCED GAIT SPEED
Muscle Mass and Quality
The second step may be measuring muscle mass, although not all experts agree on the need for a muscle mass measure to diagnose sarcopenia. As mentioned, this is because all available measures are in fact estimations of muscle mass based on a number of assumptions that depend on the technique, results have high inter- and intra-rater variability, cutoff points are inconsistent, and a low muscle mass has not been consistently shown to be linked to adverse outcomes. Also, a low muscle mass may be due to malnutrition or cachexia and thus not directly caused by sarcopenia. That being said, an estimation of muscle mass would help link low muscle strength to a muscle problem. More accurate techniques, such as the creatine dilution test, are in development to measure active muscle mass. If more consistent links with outcomes are shown, they may help solve the limitations of available methods in the near future.
The most widely used methods to estimate muscle mass in practice at present are dual-energy X-ray absorptiometry (DXA) and bioelectrical impedance analysis (BIA). The former has the widest evidence base, and definitions usually refer to it when defining cutoff points. It has the advantage that bone mineral density can be measured simultaneously. Usually, appendicular lean mass (skeletal muscle in the extremities) is measured, and in most cases an adjustment for height is added to define cutoff values. BIA is useful as a bedside test; but as BIA equations and cut points are population- and device-specific, there is a lack of standardization that limits its accuracy. CT and
MRI can also estimate muscle mass, usually by a cross-sectional image of the psoas muscle or the thigh. As such images are obtained routinely in some conditions and disciplines (surgery, oncology), they may be helpful in these settings, although the link to outcomes is still to be defined in larger populations. Ultrasound has also been proposed as a simple alternative to measure muscle mass in clinical practice, and a standardization of measures (the SARCUS project) has been proposed by a European Geriatric Medicine Society working group, but it still lacks validated cutoff points for major muscles. Finding a very low muscle mass with any of these techniques, together with a low grip strength, would make sarcopenia very likely, while measures closer to the cutoff points used (that depend on gender, population, and method) may be more equivocal.
Muscle quality is a widely used term for an ill-defined concept. It may refer to at least two different concepts: (1) the relation between muscle strength and muscle mass (ie, the amount of strength delivered by each muscle mass unit) or (2) some observable characteristics of the muscle, either macroscopically or microscopically (such as inter- or intramuscular adiposity, or the ratios of different muscle fiber types). Quality may prove to be a relevant concept, but as yet is not sufficiently defined for use in clinical practice. An estimation of fat infiltration or muscle heterogeneity in imaging (CT, MRI, or ultrasound), if shown to be relevant to outcomes, would be a simple and useful approach to define a muscle problem as the cause of low muscle strength.
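As a rough illustration of two quantities discussed above: a height-adjusted appendicular lean mass index, and the strength-per-mass sense of muscle quality. Note that dividing by height squared is an assumption about the usual form of the adjustment, since the text only states that an adjustment for height is added; cutoff values are definition-specific and are not shown here.

```python
def alm_index(alm_kg: float, height_m: float) -> float:
    """Appendicular lean mass adjusted for height (kg/m^2).
    Height-squared is one common form of the adjustment mentioned
    in the text (an assumption here; cutoffs depend on the
    definition, gender, and population)."""
    return alm_kg / height_m ** 2

def muscle_quality(strength_kg: float, mass_kg: float) -> float:
    """First sense of 'muscle quality' above: strength delivered
    per unit of estimated muscle mass."""
    return strength_kg / mass_kg

# Illustrative values only, not diagnostic thresholds.
print(round(alm_index(20.0, 1.70), 2))
```

A patient with preserved mass but low strength would show a low quality ratio, which is the pattern the second diagnostic concept tries to capture.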
Physical Performance
Physical performance has been defined as the ability to carry out physical tasks in order to function independently in daily life. It measures composite functions of the whole body as opposed to the function of a single organ and depends on an intact musculoskeletal system integrated with the central and peripheral nervous systems, with significant involvement of a range of other body systems (cardiovascular, vision, balance, and other). Measures of physical performance usually involve standing or walking, and may include measures of mobility, strength, power, balance, and endurance. The most commonly used test in geriatric practice is gait speed in a 4- to 6-m track, again using standardized protocols. Gait speed has been shown to be related to functional outcomes and mortality along the whole spectrum of results, and even small losses (0.1 m/s) are predictive of impaired outcomes. Cutoffs
have been proposed (Table 49-4). There is no agreement yet on the exact position of gait speed in the definition of sarcopenia: as an outcome (FNIH), as part of the core definition (EWGSOP, AWGS) or as a measure of severity of sarcopenia that shows the impact of a reduced muscle function in body function (EWGSOP2, AWGS). Nevertheless, gait speed is probably a basic vital sign in geriatric practice.
Other measures of physical performance that are common in geriatric practice are also used to define sarcopenia or its severity. The Short Physical Performance Battery (SPPB) adds a measure of balance and of muscle strength/power/endurance (sit-to-stand test) to gait speed. It has been validated in many clinical settings and is associated with frailty. The sit-to-stand test has also been proposed as a proxy for muscle strength when a dynamometer is not available (EWGSOP2 and AWGS). The Timed Up and Go (TUG) test is a widely used and useful measure of physical performance. Other walking tests (6 minutes, 400 m) may be chosen in some clinical and research settings.
Biomarkers
Most of the proposed diagnostic measures are (physical) biomarkers of sarcopenia. However, clinicians are used to including blood biomarkers in the diagnostic scheme of most diseases, so a search for such blood (or other tissues) biomarkers of sarcopenia is underway. Many of the elements that are known to be altered in sarcopenia, including the neuromuscular junction, inflammation, micro-RNAs, hormones, metabolic by-products, and others, have been proposed as potential biomarkers. Research in this area is complex and to date results are inconclusive. It may well be that composite panels including several biomarkers, rather than individual biomarkers, are needed.
Underlying Causes
Once sarcopenia has been confirmed, a systematic approach that includes a comprehensive geriatric assessment is recommended to search for potential underlying causes. Frequent causes of sarcopenia are described in Table 49-5. Most older patients will have more than one associated condition, and not all may be treatable. However, understanding the full picture of the individual patient will be helpful when deciding a therapeutic approach. When no evident cause of a gradual-onset chronic sarcopenia is present in an
older person, age-associated (primary) sarcopenia is diagnosed, and further exploration of long-term habits that may have led to sarcopenia is appropriate.
TABLE 49-5 ■ FREQUENT UNDERLYING CAUSES OF SARCOPENIA
DIFFERENTIAL DIAGNOSIS WITH CACHEXIA, MALNUTRITION, AND FRAILTY
The recent global definition of malnutrition, the Global Leadership Initiative on Malnutrition (GLIM), includes low muscle mass as one of the three phenotypic criteria that are part of the five criteria that define malnutrition (the three phenotypic criteria are nonvolitional weight loss, low body mass index, and reduced muscle mass, and the two etiologic criteria are reduced food intake or assimilation and inflammation or disease burden). Thus, when low muscle mass is found in a given patient, checking if any of the other four proposed GLIM criteria are present can help determine if malnutrition is present, since the coexistence of sarcopenia and malnutrition is not infrequent. When low muscle mass is not associated with low muscle function, malnutrition will probably be present and will be the main diagnosis of that patient. When both low muscle mass and low muscle function are present in a malnourished patient, the most usual scenario will be that malnutrition has caused sarcopenia.
“Cachexia” is a term used to describe severe weight loss and muscle wasting associated with cancer, HIV infection, or end-stage organ failure. The most common definitions of cachexia include the presence of a low muscle mass as well. In fact, cachexia and sarcopenia may coexist. Cachexia has a complex pathophysiology including excess catabolism and inflammation, and endocrine and neurological changes that are different from those described in sarcopenia. Conceptually, cachexia may be the end result of some disease-related cases of sarcopenia. International consensus definitions of cachexia may guide clinical judgment to decide if sarcopenia or cachexia predominate; this is relevant for the choice of treatment, as cachexia would not respond to sarcopenia treatment.
Frailty, addressed in Chapter 42, has a physical aspect that is closely related to sarcopenia. In fact, the three elements that define sarcopenia are embedded in the Fried physical frailty phenotype: unintentional weight loss (which usually involves muscle mass loss), weakness (low grip strength), and slow walking speed. Sarcopenia is a major determinant of physical frailty, as it impairs the main mobility-related organ. Of course, physical frailty can be caused by nonmuscular diseases, and mild sarcopenia may not lead to frailty. Thus, they are distinct but overlapping constructs.
TREATMENT
Because sarcopenia is a complex condition—and a quite recently defined one—evidence about treatment is still limited. Understanding the
pathophysiology of sarcopenia is key to developing effective new interventions, and translational research in this area is needed. Of course, addressing the causes and risk factors identified in the comprehensive geriatric assessment may be useful in treating sarcopenia and preventing relapses, but to date no trials have been carried out with a multimodal approach. The first available evidence-based clinical practice guideline places a strong recommendation only on physical activity as the cornerstone of treatment of sarcopenia.
Physical Exercise
There is compelling evidence that supports the benefits of resistance exercise in improving skeletal muscle function and mass both in younger and older persons, and this evidence extends to patients with sarcopenia. However, the optimal mode, duration, and intensity of exercise are still ill-defined.
Resistance exercise uses weights or elastic bands in repetitive movements to improve strength and power of groups of muscles. To treat sarcopenia, both lower and upper extremities should be exercised, at least twice a week. The minimum expected time to obtain an objective improvement seems to be close to 3 months. Ideally, exercises should be started under the advice and supervision of an exercise specialist or physical therapist, in order to teach the correct way to perform each exercise and avoid injuries. Gradually the patient will be able to start unsupervised exercise. Compliance is key to obtaining results; it may be improved with supervision or peer support in an exercise group.
Many trials have shown that resistance exercise can be embedded in a multifaceted exercise program that includes aerobic (cardiovascular) and balance training, and possibly stretching and joint range-of-motion movements. However, patients should be aware that the usual cardiovascular exercise that physicians widely recommend to promote healthy aging and well-being (walking, swimming, dancing, bicycling) will not have an impact on sarcopenia. Including resistance exercise is key to improving muscle function.
A recommendation to increase physical activity and reduce sedentariness in usual life is common sense; it is perceived to help in the prevention and treatment of sarcopenia and other conditions. However, this has not been tested as a treatment for sarcopenia and should not replace a formal exercise program as the cornerstone of treatment.
Nutrition
The evidence about nutrition interventions in treating sarcopenia is less consistent, with only conditional or weak recommendations in the clinical practice guideline. However, some recommendations can be made. Protein intake is key to muscle health, and the recommendations for protein intake in older people have recently been raised from the former 0.8 g per kg of body weight per day to 1–1.2 g in healthy older adults, and up to 1.5 g in disease. Most older persons do not reach these targets, so a careful review of patient food intake by an expert nutritionist is needed to consistently increase protein intake. This approach has been shown to be feasible and effective in a large multicenter randomized controlled trial. When the required levels of protein intake cannot be reached with usual food, the use of high protein nutritional supplements may be considered (usually in the form of oral nutritional supplements rather than protein-only supplements).
Some individual nutrients may also be useful to improve muscle mass and function. The essential amino acid leucine and its metabolite beta-hydroxy beta-methylbutyric acid (HMB) have shown some effects in improving muscle mass and function in randomized controlled trials, although it is not yet clear which individuals benefit most. Mild sarcopenia seems to respond better. Trials have usually included leucine and HMB in full oral supplements, so the effect of leucine and HMB in isolated forms in older people is unknown. There is also some evidence supporting the use of fish oil–derived n-3 (omega-3) polyunsaturated fatty acids (PUFA) and creatine. Vitamin D has not been shown to improve muscle function in sarcopenic patients, although it may be useful in persons with severe vitamin D deficiency.
Some trials have explored nutrition interventions in combination with physical exercise in the treatment of sarcopenia, with mixed results; only a few trials show some synergistic effects. Since physical exercise increases energy needs, energy intake (carbohydrates and fat) should be adjusted to the new requirements by increasing the caloric content of the diet. Supplements may occasionally be needed. Also, the incorporation of proteins into muscle seems to be time-limited in older people and has a higher anabolic threshold in older compared to younger adults. Therefore, a timed boost of proteins seems to be better than spreading increased protein intake over the day.
Basic research has shown that aged muscle will most avidly take up proteins immediately after exercise.
Pharmacologic Treatments
At this time, there are no drugs approved for sarcopenia, nor are any expected in the short term. There are many reasons for this situation, including the lack of guidance from drug approval agencies on what would be acceptable clinical trial outcome measures for drug approval, uncertainty about which particular groups of patients with sarcopenia would benefit more, issues about the optimal trial design and timing, lack of strong basic science research to support most drugs, and the multifaceted pathophysiology of the condition.
Trials to date have been performed mostly with hormones. Combined estrogen–progesterone, dehydroepiandrosterone (DHEA), growth hormone, growth hormone-releasing hormone, combined testosterone-growth hormone, and insulin-like growth factor-1 have all shown negative results, but many studies have not included well-defined sarcopenic patients. Testosterone may increase muscle mass in men with low serum testosterone levels (< 200–300 ng/dL), but physical function does not seem to improve, and this drug is not devoid of side effects. In more recent trials, the initial interest in selective androgen receptor modulators (SARMs, which may be used in women as well as men) from small phase I and II trials has not been followed by convincing results from larger studies. Pioglitazone and angiotensin-converting enzyme inhibitors have also been explored with negative results.
Myostatin is a myokine, a growth factor that inhibits muscle cell growth. There is evidence that myostatin inhibition may be able to increase muscle mass, consistent with the recognition that myostatin acts as a brake on muscle differentiation, hypertrophy, and protein synthesis. A phase II proof of concept trial reported that a myostatin antibody was associated with increased muscle mass and improvement in some measures of physical performance in older, weak patients with a history of falling (but not diagnosed sarcopenia). Bimagrumab has been shown to increase muscle mass and reduce adiposity in diabetic patients, but again the effects on muscle function and physical performance seem to be limited. Myostatin inhibitor research seems to be evolving to explore changes in body composition and moving away from muscle function.
ASSESSING THE EFFECT OF TREATMENTS
There is still no agreement on what intermediate and final outcomes should be expected to improve with an intervention for sarcopenia. In clinical practice, the simplest approach is to assess changes in the parameters used for the diagnosis. Gains in muscle mass are usually difficult to assess (due to the mentioned technical issues of available instruments) and do not seem to be relevant for most older patients. Gains in muscle strength may be easier to assess but more difficult to obtain with available treatments. The minimum clinically significant difference for muscle strength is not yet defined in older patients with sarcopenia. At present, there is a case for using physical performance measures to assess improvement in clinical practice.
Improvements in gait speed of over 0.1 m/s, or of over 1 point in the SPPB, are accepted as being clinically relevant. Improvements in final outcomes, including activities of daily living, the number of falls, or quality of life, are even more relevant but not as straightforward to determine. Patients rate improved mobility as the most desired outcome of treatment, followed by the ability to manage domestic activities, a lower risk of falls, reduced fatigue, and a better quality of life.
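The thresholds quoted above can be expressed as a one-line check; the function name is illustrative, and the strict comparisons mirror the text's "over 0.1 m/s" and "over 1 point."

```python
def clinically_relevant_gain(gait_speed_gain_ms: float = 0.0,
                             sppb_gain_points: int = 0) -> bool:
    """Check a treatment response against the thresholds quoted in
    the text: a gait speed gain over 0.1 m/s or an SPPB gain over
    1 point is accepted as clinically relevant."""
    return gait_speed_gain_ms > 0.1 or sppb_gain_points > 1

# A 0.15 m/s gait speed gain clears the threshold.
print(clinically_relevant_gain(gait_speed_gain_ms=0.15))
```

In practice either measure can be tracked at follow-up visits, since gait speed is cheap to repeat with a standardized protocol.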
FURTHER READING
Beaudart C, Zaaria M, Pasleau F, Reginster J-Y, Bruyère O. Health outcomes of sarcopenia: a systematic review and meta-analysis. PLoS One. 2017;12:e0169548.
Bhasin S, Travison TG, Manini TM, et al. Sarcopenia definition: the position statements of the sarcopenia definition and outcomes consortium. J Am Geriatr Soc. 2020;68:1410–1418.
Bruyère O, Beaudart C, Ethgen O, Reginster J-Y, Locquet M. The health economics burden of sarcopenia: a systematic review. Maturitas. 2019;119:61–69.
Buckinx F, Landi F, Cesari M, et al. Pitfalls in the measurement of muscle mass: a need for a reference standard. J Cachexia Sarcopenia Muscle. 2018;9:269–278.
Calvani R, Picca A, Cesari M, et al. Biomarkers for sarcopenia: reductionism vs. complexity. Curr Protein Pept Sci. 2018;19:639–642.
Chen LK, Woo J, Assantachai P, et al. Asian Working Group for Sarcopenia: 2019 consensus update on sarcopenia diagnosis and treatment. J Am Med Dir Assoc. 2020;21:300–307.e2.
Cruz-Jentoft AJ, Bahat G, Bauer J, et al. Sarcopenia: revised European consensus on definition and diagnosis. Age Ageing. 2019;48:16–31.
De Spiegeleer A, Beckwee D, Bautmans I, Petrovic M. Pharmacological interventions to improve muscle mass, muscle strength and physical performance in older people: an umbrella review of systematic reviews and meta-analyses. Drugs Aging. 2018;35:719–734.
Dent E, Morley JE, Cruz-Jentoft AJ, et al. International clinical practice guidelines for sarcopenia (ICFSR): screening, diagnosis and management. J Nutr Health Aging. 2018;22:1148–1161.
Dodds RM, Syddall HE, Cooper R, et al. Grip strength across the life course: normative data from twelve British studies. PLoS One. 2014;9:e113637.
Evans WJ, Hellerstein M, Orwoll E, Cummings S, Cawthon PM. D3-creatine dilution and the importance of accuracy in the assessment of skeletal muscle mass. J Cachexia Sarcopenia Muscle. 2019;10:14–21.
Frontera WR, Zayas AR, Rodriguez N. Aging of human muscle: understanding sarcopenia at the single muscle cell level. Phys Med Rehabil Clin N Am. 2012;23:201–207, xiii.
Ida S, Kaneko R, Murata K. SARC-F for screening of sarcopenia among older adults: a meta-analysis of screening test accuracy. J Am Med Dir Assoc. 2018;19: 685–689.
Manini TM, Clark BC. Dynapenia and aging: an update. J Gerontol A Biol Sci Med Sci. 2012;67:28–40.
Pavasini R, Guralnik J, Brown JC, et al. Short physical performance battery and all-cause mortality: systematic review and meta-analysis. BMC Med. 2016;14:215.
Perkisas S, Baudry S, Bauer J, et al. Application of ultrasound for muscle assessment in sarcopenia: towards standardized measurements. Eur Geriatr Med. 2018;9:739–757.
Roberts HC, Denison HJ, Martin HJ, et al. A review of the measurement of grip strength in clinical and epidemiological studies: towards a standardised approach. Age Ageing. 2011;40:423–429.
Rosenberg IH. Sarcopenia: origins and clinical relevance. J Nutr.
1997;127:990S–991S.
Sayer AA, Syddall H, Martin H, Patel H, Baylis D, Cooper C. The developmental origins of sarcopenia. J Nutr Health Aging.
2008;12:427–432.
Yoshimura Y, Wakabayashi H, Yamada M, Kim H, Harada A, Arai H. Interventions for treating sarcopenia: a systematic review and meta- analysis of randomized controlled studies. J Am Med Dir Assoc.
2017;18:553.e1–553.e16.
Chapter 50
Mobility Assessment and Management
Valerie Shuman, Caterina Rosano, Jennifer S. Brach
INTRODUCTION
Mobility problems are pervasive in older adults. Mobility limitations affect personal independence, need for human help, and quality of life. Limited mobility predicts future health, function, and survival. Like other geriatric syndromes, mobility disorders are often caused by diseases and impairments across many organ systems; so evaluation and management require multiple perspectives and disciplines. Health care providers should be able to assess and treat mobility problems. They should be able to measure and interpret clinical indicators of mobility such as gait speed and the short physical performance battery. They should know the physiologic and biomechanical mechanisms underlying normal and abnormal mobility, the differential diagnosis of the causes of mobility disorders, and the approaches to management of mobility problems. With rapid technological advancements in the measurements of mobility, geriatricians should be aware of novel assessments that could be used in the clinic.
DEFINITIONS AND METHODS OF CLASSIFICATION
Defining Mobility
Mobility is the ability to move one’s own body through space. Mobility requires force production and feedback control systems to navigate the mass of the body through a three-dimensional environment. Walking is the fundamental mobility task for human life. Mobility also includes a wide range of other important activities that require moving one’s body, from turning over in bed to climbing stairs. Mobility tasks have an inherent
hierarchical order based on the biomechanical and physiologic demands made on the body. This orderedness is apparent in the developmental tasks of infancy and childhood when mobility independence is first achieved. The simplest and first mobility task is turning over in bed, followed by sitting upright, transferring from lying to sitting and from sitting to standing. From there, individuals progress to locomotion with an increased base of support (like crawling or using a walker), independent two-legged walking, and finally to more challenging tasks like ascending and descending stairs, running, climbing ladders, and playing sports.
Learning Objectives
Describe the prevalence and consequences of mobility disorders in older people.
Identify strategies to detect and evaluate mobility disorders.
Characterize approaches to the management of mobility disorders.
Key Clinical Points
Loss of mobility is a major cause of disability in older people but is rarely assessed in typical clinical practice.
The assessment of mobility involves taking a history and performing a physical examination that identifies cardiopulmonary, musculoskeletal, psychological, and neurologic contributors.
Mobility can be assessed by a number of simple performance tests including gait speed, the short physical performance battery, and the Timed Up and Go test.
Interventions to promote mobility include addressing underlying impairments, therapeutic exercise, assistive devices, home adaptations, and caregiver training.
Mobility disability is best defined within a conceptual framework such as that of the World Health Organization’s International Classification of Functioning, Disability and Health (ICF) (Table 50-1). The concept of disability in general has transitioned from being considered a biological consequence of pathologic processes to a social construct influenced by personal and environmental factors. For example, the inability to climb 12 stairs may be considered a disability in a region with primarily two-story homes but not in an area largely made of ranch-style homes. Mobility disability occurs at the level of the whole person and can be manifested by difficulty in carrying out basic hygiene activities such as bathing or by limitations in participating in life roles, such as shopping for family members. Mobility disability is often defined by functional limitations in walking, transferring, or climbing stairs, which are caused by problems with strength, endurance, coordination, balance, and range of motion. Impaired dual-task capacity, such as attending to cognitive tasks while walking, is another clinical manifestation of mobility disability. These functional limitations can be caused by numerous pathologic processes at the body structure level. Mobility disability can precipitate a cycle of negative consequences because it often leads to decreased activity, which in turn worsens functional limitations and causes organ system deconditioning, including muscle weakness, loss of joint range of motion, and poor cardiovascular endurance. Whether or not an individual is classified as having mobility disability can be modified by psychological, social, and environmental factors.
TABLE 50-1 ■ MOBILITY DISABILITY AND THE DISABLEMENT PROCESS
Classification Methods for Mobility
Mobility classification is often driven by a tacit assumption of orderedness. Few current instruments assess the full range of mobility, from the lowest level of rolling over in bed to the highest levels of endurance and coordination required for athletics or dance. The Patient-Reported Outcomes Measurement Information System (PROMIS®), an NIH-funded
initiative, includes both computer-adaptive tests and fixed short forms that comprehensively assess domains such as mobility. These batteries are used in both research and clinical practice. Mobility assessment
tools for older adults generally address one or more of the following three mobility levels: nonambulatory, ambulatory, and vigorous, corresponding broadly to Tinetti’s levels of frail, transitional, and vigorous mobility status. Mobility levels are surprisingly stable: although day-to-day variability does occur, people generally remain in one level or decline very gradually unless a major event has occurred. Within the nonambulatory level, there are important mobility skills that affect independence in personal care activities, care needs, and demand for human help; these skills include bed mobility, self-transfer skills, and wheelchair mobility. Within the vigorous mobility level, the degree of fitness, as represented by the ability to perform demanding or challenging mobility activities, may be a useful indicator of the extent of physiologic reserve and hence a marker of the ability to tolerate physiologic stressors such as acute illness, surgery, or periods of reduced activity.
Mobility can be assessed by self-report, professional observation, or performance-based measures. Instruments to assess mobility from all three perspectives have been developed. We selected instruments with established reliability and validity for use with older adults (Table 50-2). Each has advantages and disadvantages, including floor or ceiling effects. These tools have been used to estimate the population incidence and prevalence of mobility disorders, predict the consequences of mobility problems, screen patient populations, determine care needs, and determine reimbursement of services in rehabilitation settings. More detailed instruments have been developed specifically for sorting out causes and mechanisms. Detailed instruments for diagnosing the etiology of mobility limitations will be discussed separately in the section on causes of mobility limitations.
Mobility measures have varying strengths and limitations, depending on characteristics such as reliability and validity, respondent burden, feasibility and convenience of use, and assessor skill required.
TABLE 50-2 ■ INSTRUMENTS USED TO SCREEN AND CLASSIFY MOBILITY
Self-report measures are the easiest type of measure to obtain when gathering data from large populations. Self-report measures have high face validity in that they reflect the opinion of the person themselves. Various self-reported walking measures predict performance of functional mobility in older adults. Since self-report measures usually ask about a period of time, such as recent weeks or months, they can identify fluctuating ability over time. Self-report measures can be limited by problems with reliability, accuracy, and nonresponse. Because they usually use ordinal scales, they lack the ability to discriminate small but important differences.
Professional observation measures reflect the opinion of an experienced assessor and may be more feasible when an individual is considered an unreliable informant or is unable to cooperate with testing (eg, individuals with severe cognitive decline). Professional report is limited by the need for assessor experience and training and can be vulnerable to problems with inter-assessor reliability unless extensive efforts to standardize assessment are made. Since professional reports are usually based on ordinal scales, ability to discriminate small but important differences might be limited.
Performance-based measures are independent of assessor opinion. Most performance measures produce continuous quantitative results, which allow for discrimination of small but important or subclinical differences.
Performance-based measures are limited in that they require direct observation, subject understanding and cooperation, and standardization of instructions and procedures. These tools measure capability rather than actual daily mobility activities (ie, the assessment of gait speed in a clinician’s office is not a valid assessment of a person’s gait speed at home). The need for subject cooperation can lead to problems with nonresponse.
Performance-based measures do not account well for short-term fluctuations over hours to weeks. Despite these limitations, performance-based testing may have direct application in clinical settings because it is brief and can provide useful information.
Psychological aspects of fear, attention, mobility confidence, and activity avoidance can have great effect on mobility disability, separately or in combination with observable mobility limitations. Several instruments to assess psychological factors related to mobility disability have been developed (Table 50-3). Items from these scales might have use in the clinical setting.
TABLE 50-3 ■ INSTRUMENTS TO ASSESS PSYCHOLOGICAL FACTORS RELATED TO MOBILITY
EPIDEMIOLOGY
Prevalence
The epidemiology of mobility disability can be considered from the perspective of basic or higher-level mobility. Basic mobility problems map to the nonambulatory to ambulatory range; higher-level mobility problems map to the ambulatory to vigorous range.
Mobility disability increases dramatically with age; dependence in getting around inside increases from 5% of persons aged 65 to 74 years to 30% of persons aged 85 or older. Women tend to have higher rates of mobility disability than men, 30% compared to 23% of those 65 years and older. Racial differences exist in older adults as well; 34% of Blacks/African Americans, 23% of Asian Americans, 35% of other/multiracial background, and 25% of White people self-report experiencing mobility disability. Those with higher levels of depression report greater prevalence of mobility limitations.
Basic mobility problems include tasks such as getting around inside the house and transferring from bed or chair. Examples of higher-level mobility problems are getting around outside the house, ability to walk one-quarter or one-half mile, and ability to climb stairs. Basic mobility problems are uncommon in community-dwelling older persons but are very frequent in
institutionalized older people. Among community-dwelling persons aged 65 and older, approximately 5% are dependent in getting in and out of a chair or bed and 7.5% are dependent in getting around inside. Among institutionalized persons older than 65 years, approximately 80% are dependent in getting in and out of a chair or bed and getting around inside.
In the past decade, basic mobility problems have decreased in prevalence for those older than 85 years, while remaining stable for those aged 65 to 84 years. The decline in prevalence observed in the oldest old may be due to decreasing rates of disability in some chronic diseases in late life; for example, arthritis and heart disease, the two diseases most associated with increased disability, have become less disabling. With advances in the management of these conditions, progression is delayed; consequently, the disabling effect on mobility is reduced. At the same time, the incidence of obesity and sedentary lifestyle has increased in middle age and among the young old; these are also major contributing factors to disability, and these trends may explain why disability has not declined among the younger old.
Higher-level mobility problems, defined as difficulty walking a quarter mile or climbing stairs, increase with age, are more common in women than in men, and appear to be decreasing. Approximately 13% of Americans older than 60 years report higher-level mobility problems in that they have difficulty going outside the home alone. There is a marked increase with age. There is also geographic variation; self-reported difficulty going outside the home alone is more common among older adults in the southern United States.
Risk of Adverse Consequences Associated with Mobility Status
Mobility problems have serious consequences. Mobility status predicts
mortality. Older people with difficulty walking 2 km or climbing one flight of stairs are twice as likely to die during the next 8 years compared to those with no difficulty. Mortality risk is even higher among those who have mobility difficulty and are also physically inactive. Poor mobility performance, even in the absence of self-reported mobility limitations, is an independent predictor of death. Among persons who report no mobility problems, gait speed less than 1.0 m/s is associated with an increased risk of death. Older persons who have a 0.1 m/s decline in gait speed over 1 year have double the risk of dying during the subsequent 5 years, whereas older
persons who have a 0.1 m/s improvement in gait speed over 1 year have a 40% decreased risk of dying in the following 8 years. Improving mobility through exercise is associated with reduced risk of future falls. In a pooled analysis of over 30,000 community-dwelling older adults, each 0.1 m/s faster gait speed was associated with a 13% reduction in risk of mortality.
Poor mobility performance is an independent predictor of future self-care difficulty and mobility disability. Among community-dwelling persons older than 70 years without disability, baseline physical performance score was a powerful predictor of incident disability in both activities of daily living and in higher-level mobility disability. Mobility self-report and performance have been shown to predict disability in older populations from numerous countries and cultures, including Mexican American, British, Italian, French, Dutch, Spanish, Scandinavian, Australian, Japanese, and Chinese.
Poor mobility performance is also an independent predictor of hospitalization and nursing home placement. In a population-based study of nondisabled older adults, poor baseline physical performance score was associated with a twofold increased risk of hospitalization and more days in the hospital over the following 4 years, independent of baseline health status. The risk was mostly associated with hospitalization for dementia, pressure ulcers, hip fracture, other fractures, pneumonia, and dehydration.
Mobility may be part of an underlying constellation of core factors that link multiple outcomes associated with aging. Poor mobility, as measured by timed chair stands, is one of four factors proposed to be common risk factors for geriatric syndromes (the others are incontinence, falls, and functional decline). Conversely, good mobility, along with good cognition and nutritional status, is an independent predictor of recovery of functional independence after a period of disability. Abnormalities of gait and slow gait speed have been found to precede the onset of cognitive decline and dementia, especially vascular dementia. Among older adults, simultaneous abnormalities of mobility, cognition, and mood are more common than would be expected by chance, perhaps implying potential common underlying causes.
Severe mobility disability, sometimes called immobility, has widespread and devastating consequences. It accelerates impairments in multiple organ systems, including bone, muscle, heart, circulation, lung, skin, blood, bowel, kidney, nutrition, and metabolism. Loss of organ system function can be rapid and severe; muscle strength can decline by 1% to 5% per day of enforced
bed rest. Skin breakdown and pressure ulcers start to occur after only hours of persistent and unrelieved pressure. Major consequences of clinical significance include decreased plasma volume, orthostatic hypotension, accelerated loss of bone density, muscle weakness, decreased pulmonary ventilation, and constipation leading to fecal impaction. When even temporary bed rest is combined with the increased vulnerability of aging and acute illness, there is a marked increased risk of death, disability, and institutionalization.
PATHOPHYSIOLOGY
The causes of mobility problems are complex. Unique and complementary etiologic perspectives can all contribute to a better understanding of mobility. Three perspectives are described here: biomechanical, biomedical, and biopsychosocial. Each has advantages and disadvantages when used as frameworks to understand mobility problems.
Biomechanical Perspective: Direct Assessment of the Body in Motion
Age affects the biomechanics of walking. Normal gait can be defined in
terms of the gait cycle with two main phases: stance and swing (Figure 50-1). A normal cycle begins with a push off from the forefoot, then a swing through and heel strike, timed tightly to be followed by the push off of the other leg; normal gait initiates at the ankle, not the hip. Normal gait has highly characteristic patterns of foot, ankle, knee, hip, pelvis, trunk, and arm motion. Gait biomechanics can also be viewed from the perspective of the pattern of steps (footprints) (Figure 50-2). Step length is the forward distance between two foot falls. Stride length is the distance covered by one foot until it falls again. Stride length is, therefore, twice the step length, assuming the step lengths are the same on both sides. Step width is the lateral distance between two foot falls. With age, gait speed slows, step length decreases, and the proportion of the gait cycle when both feet are in contact with the ground (double support time) increases. During gait, older people compared to young adults tend to have more thoracic kyphosis, more posterior pelvic tilt, decreased hip extension, and greater external rotation of the foot. Older people tend to generate less power from the ankle and use hip flexion to compensate more than young adults. Normal gait has a very regular spatial and temporal pattern. An irregular gait can be either regularly irregular, like
a limp, or irregularly irregular, with no pattern at all (Figure 50-3). Irregularly irregular gait, often called gait variability, predicts falls and mobility disability.
FIGURE 50-1. Human walking.
FIGURE 50-2. Step patterns in human walking.
FIGURE 50-3. Gait patterns.
Normal walking maximizes energy efficiency. When walking changes owing to biomechanical alterations caused by disease or aging, walking becomes more energy demanding. Normal walking also requires excellent control of balance and timing. When problems develop with balance and timing, the priority for safe walking may be to increase stability and support at the expense of losses of energy efficiency. Thus, many changes with aging increase the energy cost of walking and decrease gait efficiency.
A biomechanical perspective can be applied broadly to mobility and balance, based on the increasing biomechanical demand placed on the body in motion. Typically, tasks are considered more challenging as the base of support narrows and transfers of mass over the base become more
demanding. Thus, difficulty increases from sitting to standing to walking, to climbing stairs, walking a line, or running. A biomechanical approach to postural alterations and body-segment movement abnormalities can be useful for identifying causes of, and solutions to, mobility problems. Specific abnormalities can be addressed by targeting rehabilitative programs or the type of assistive devices. Limitations to the biomechanical framework include lack of consideration of how various physiologic or external factors impact mobility problems.
Biomedical Perspective: Using Organ System Impairment to Link Function and Disease
The causes of mobility limitation can be assessed from a physiologic standpoint. There are three main physiologic components of mobility: balance control (neurologic system), force production (cardiopulmonary and muscular systems), and structural support (skeletal system—bone and joints). Ferrucci created a framework that identifies six main physiologic subsystems that influence walking ability: central nervous system, perceptual system, peripheral nervous system, muscles, bones and joints, and energy production. Another way of organizing the systems that affect walking is to consider inputs and outputs. In this approach, the physiologic elements of mobility can be assigned to three main components based on a sequence of information from (1) sensory inputs to (2) central processing to (3) effector factors that carry out instructions from the brain. Sensory inputs include vision, vestibular function, and peripheral sensation. Central processing includes level of attention, rapid integration of sensory inputs, coordinated timing of multiple segmental body motions, and postural reflexes. Effector-related factors include generation of muscle strength and power, endurance, pain, speed of reaction, and flexibility. There are several important emerging areas of knowledge related to our understanding of the physiologic contributors to walking. Brain abnormalities found on magnetic resonance imaging (MRI) are associated with alterations in attention, rapid processing and integration of multiple sensorimotor inputs, and abnormal gait. Small vessel cerebrovascular disease in the absence of stroke and MRI findings of “white matter disease” or focal grey matter atrophy are associated with slower gait speed, even among high-functioning older adults. These MRI findings predict the onset of mobility disability. Such brain abnormalities appear to be most pronounced in the frontal areas and basal ganglia, regions that are most
vulnerable to changes in cerebral perfusion with older age. These areas have been found to concurrently affect mobility, cognition, and mood and may suggest a shared underlying cerebrovascular process. Another emerging area of knowledge is related to subclinical losses of dopaminergic transmission in the brain in the aged. These losses of dopaminergic function also contribute to altered mobility and may present clinically in patterns that differ from traditional Parkinson disease. Dopamine deficiency in the aged may be related to cerebrovascular disease. Loss of oxygen-carrying capacity because of anemia has been recognized as a potential contributor to mobility limitations, especially in Caucasians, perhaps owing to decreased endurance but also possibly because of chronic subclinical ischemic effects on the brain. Disorders of glucose regulation, including poorly controlled diabetes, may also affect the integrity of brain motor areas and compromise mobility control. Research on the links among cardiovascular risk factors, brain structural abnormalities, mobility, cognition, and mood is developing rapidly. In the future, it may be possible to refine these observations into diagnosable and possibly treatable disorders. It is possible that attention to cardiovascular and metabolic risk factors might reduce the risk of developing some of these brain abnormalities and thus potentially reduce the incidence of mobility problems.
A physiologic perspective on mobility helps define organ system impairments, which can be linked to treatable diseases, conditions, and pathologic processes, as described by the disablement process in Table 50-1.
A physiologic perspective is also helpful when accounting for the multiple interacting health problems of many older adults. When more than one organ system that is important for mobility is impaired, the risk of mobility problems increases. Thus, mobility can be affected when one system is severely disrupted, when several are modestly disrupted or when many are mildly disrupted. A physiologic perspective also helps account for the phenomenon of stress-induced disability. Organ systems have excess capacity, called physiologic reserve. Losses of organ system function can be clinically unapparent because these are losses of unused reserve or because one system is compensating for another. Many subclinical physiologic losses may not be recognized until stress is placed on the system by further physiologic decline or by an unexpected high demand on the system. When sufficient reserve is lost or a compensating organ system fails, mobility disability becomes overt. For example, when persons with many subclinical
physiologic losses face an unexpected mobility demand, such as walking on ice at night, they may face a demand that is greater than their mobility capacity. Mobility reserve can now be assessed. One way to challenge reserve is to perform simultaneous physical and cognitive tasks; in such “dual tasks,” older individuals may deteriorate significantly when asked to walk and talk or to walk and perform a mental calculation. Mobility tests that incorporate obstacles also assess reserve. Tests of gait variability may be another way to detect subclinical change in mobility.
A physiologic perspective can help define interventions, based on the disablement structure described in Table 50-1. Treatments could be aimed at managing the underlying pathologic conditions that are causing the impairment (eg, improving cardiac ejection fraction through treatment of congestive heart failure), treating the impairments themselves (eg, strength training), or creating compensations and adaptations at the level of functional limitations (such as using a cane). A physiologic approach offers a constructive way to connect biomechanical and clinical mobility assessment to a biomedical model of diagnosis and treatment. The individual is the focus of the biomedical model, however, and addressing biomechanical and clinical factors may improve the physiologic impairments of mobility problems but not address mobility disability.
Biopsychosocial Perspective: Addressing Personal, Social, and Environmental Contributions
The experience of mobility disability can be viewed as a mismatch between personal capacities (physiologic and psychological) and social and environmental demands. Psychological factors that influence mobility include negative attitudes toward aging, fear of falling, poor attention, and low emotional vitality. Self-reported conditions associated with increased risk of new higher-level mobility disability include baseline and incident heart attack and stroke; baseline hypertension, diabetes, angina, dyspnea, and exertional leg pain; and incident cancer and hip fracture. Epidemiologic risk factors for the onset of higher-level mobility disability include demographics, diseases, and health behaviors. Among the demographic factors associated with increased risk, advancing age has the strongest effect, with lower income and lower educational level also playing a role. Behavior-related risk factors include current smoking, alcohol abstention, low physical activity, high body mass index, and high waist circumference. Exposure to some of these factors as
early as in mid-life can influence development of disability later in life. Physical activity, a health behavior that is a key to mobility, is influenced by multiple psychological, social, and environmental factors. Common reasons given by older adults for limiting or avoiding physical activity include lack of an exercise companion, lack of interest, fatigue, fear of falling, weather, and safety. Self-reported conditions identified as major barriers to physical activity by older adults include arthritis and past injury.
Environmental demands can exacerbate mobility limitations or reinforce positive behaviors. The experience of disability is lessened when older adults’ physical capacities match the challenges present. An overly challenging environment reduces access; for example, an ambulatory older adult who struggles with stairs cannot enter a restaurant that lacks a ramp. Alternatively, inappropriate simplification of environments reinforces maladaptive behaviors with negative health consequences; for example, use of a power lift chair by an individual with intact lower extremity strength reduces use of leg muscles, potentially initiating muscle mass loss and resulting in difficulty standing from chairs without assistance. Social support follows a similar pattern. Too little social support increases mobility disability, whereas inappropriately high support diminishes opportunities to reduce the development of mobility limitations.
Advantages of the biopsychosocial model include its holistic view of all factors influencing mobility disability. Disadvantages are the complexity of identifying, evaluating, and intervening on predisposing factors.
Gait Speed as an Integrator of Multiple Approaches
Walking is the foundation of mobility, is influenced by biomechanical and physiologic processes and environmental factors and is a major driver of disability. Walking speed is considered a “vital sign” in older adults. While there are several influences on measures of gait speed such as leg length, gender, or periods of acceleration and deceleration, in general, gait speed can be interpreted clinically. Normal usual walking speed in the older adult should be at least 1 m/s. When measuring a fixed walk distance, this means that the time should be less than the distance in meters. For example, a 4-m walk time should be less than 4 seconds. When measuring a fixed time, this means that the distance should be more than the time in seconds. For example, the distance walked in a 6-minute walk (360 seconds) should be more than 360 m. The normal step frequency of walking (called the cadence)
is a little more than two steps per second or somewhat more than 120 steps per minute. With approximately two steps per second at a normal pace of 1 m/s, there is approximately 0.5 m or 20 in. in a step. Clinically, this translates into approximately two shoe-lengths per step, or an imaginary “shoe” in between each step (see Figure 50-2).
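These rules of thumb can be expressed as a short sketch. This is illustrative only; the function names are invented here, and the 1-m/s threshold and two-steps-per-second cadence follow the text above rather than any standard clinical software:

```python
def gait_speed_mps(distance_m: float, time_s: float) -> float:
    """Average gait speed (m/s) from a timed walk."""
    return distance_m / time_s

def usual_pace_is_normal(distance_m: float, time_s: float) -> bool:
    """Normal usual pace in older adults is at least 1 m/s, i.e.,
    the walk time in seconds is less than the distance in meters."""
    return gait_speed_mps(distance_m, time_s) >= 1.0

def step_length_m(speed_mps: float = 1.0,
                  cadence_steps_per_s: float = 2.0) -> float:
    """At ~2 steps/s and a pace of 1 m/s, each step covers ~0.5 m
    (about 20 in, or roughly two shoe-lengths)."""
    return speed_mps / cadence_steps_per_s
```

For example, a 4-m walk completed in 3.6 seconds gives about 1.11 m/s, which meets the 1-m/s threshold, whereas the same walk in 5 seconds (0.8 m/s) does not.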
As a general rule, there are links among walking speed, energy expenditure, and disability. Energy expenditure can be measured in metabolic equivalents (METs). One MET is the energy requirement for lying in bed.
Two METs is twice the energy requirement for lying in bed and is approximately the energy requirement for self-care activities. The usual energy cost in METs of walking at various speeds has been reported in relationship to miles per hour, and walking speed can be translated directly between meters per second and miles per hour. Therefore, the energy requirements in METs and the expected activity level can be associated with gait speed, as described in Table 50-4. This conversion table allows clinicians and researchers to approximate the functional status of individuals or populations based on any measure of gait speed. Using METs as a basis for function, a sequence of mobility capacities is described in Table 50-5.
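Because energy costs are commonly tabulated against walking speed in miles per hour (as in Table 50-4), translating a measured gait speed in meters per second is a simple unit conversion. A minimal sketch (the conversion constant is exact; the MET values themselves must come from the table, not from this code):

```python
# 1 mile = 1609.344 m, 1 hour = 3600 s, so 1 m/s is exactly 3.6 km/h.
MPH_PER_MPS = 3600 / 1609.344  # 1 m/s is approximately 2.24 mph

def mps_to_mph(speed_mps: float) -> float:
    """Convert gait speed from meters per second to miles per hour."""
    return speed_mps * MPH_PER_MPS

def mph_to_mps(speed_mph: float) -> float:
    """Convert walking speed from miles per hour to meters per second."""
    return speed_mph / MPH_PER_MPS
```

Using this conversion, the reference usual pace of 1 m/s corresponds to about 2.24 mph, which can then be looked up against tabulated MET values.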
TABLE 50-4 ■ TRANSLATING WALKING SPEED: WALKING SPEED, METS, AND FUNCTION
TABLE 50-5 ■ EXAMPLE OF A SEVEN-LEVEL CLASSIFICATION OF MOBILITY BASED ON ENERGY EXPENDITURE
EVALUATION
Strategy for the Clinical Encounter
There are no established standards for the overall evaluation and treatment of mobility problems in older adults. Currently, the most common approach is similar to those used for other geriatric syndromes. These approaches are all based on a biopsychosocial model that incorporates biomedical, rehabilitative, and psychosocial elements and a multidisciplinary team. The
initial goal of assessment is to classify the mobility problem into one of three large groups: nonambulatory, ambulatory, or vigorous. For the person who presents in a wheelchair or bed, one can screen for the ability to stand or walk with assistance. For ambulatory individuals, a quick sorting strategy is to observe gait. Since most gait parameters are highly interrelated, abnormal gait can be grossly distinguished from normal by a few simple characteristics such as use of a gait aid, gait speed less than 1 m/s (or step length less than twice the foot length), or step asymmetry. Persons with normal gait can be assessed for higher-level fitness with one or more screening tests of higher-level abilities, such as standing on one foot for 30 seconds, tandem walking, or walking more than 450 to 500 m in 6 minutes. Persons with normal walking but inability to perform higher-level tasks might be good candidates for exercise programs for well elders, or for further evaluation if the mobility change is recent or causing problems.
Further assessment of the nonambulatory patient, or of those with abnormal gait, depends in part on the treatment goals. The team can select a basic strategy: either to try to improve mobility or to compensate for irreversible mobility loss (Figure 50-4). This decision is based on patient preferences, the time course of the mobility loss, the potential to reverse impairments, and the ability of the patient to participate in treatment. For example, a person with severe cognitive deficits or severe irreversible motor paralysis might be considered more appropriate for compensation than for interventions to improve mobility. Planning for compensation might target mobility care needs and resources. For the person considered to have potential for improving mobility, the major decisions are the timing and value of interventions directed at mobility itself (usually through exercise and rehabilitation) and at underlying physiologic impairments (usually through medical team care). In primary care, the provider can screen and triage function by recognizing mobility disorders, assessing potential for intervention, and referring to other providers as appropriate. Table 50-6 gives examples of a quick system of assessment for symptoms and clinical findings based on the three main involved organ systems. The primary provider can identify and treat overt clinical impairments that can be detected quickly in the clinic, like weight-bearing pain caused by osteoarthritis or dyspnea caused by congestive heart failure. When the cause of the mobility problem is less obvious, a referral to a multidisciplinary team for comprehensive mobility assessment should be considered. Since the
potential causes range across many organ systems, this strategy might be more efficient than referral to several organ system–based specialists.
Research into mobility problems is an active and high-priority area in aging; in the future, efficient clinical practice and referral may be better informed by evidence.
FIGURE 50-4. A clinical strategy for assessment and management of mobility problems.
TABLE 50-6 ■ BRIEF EVALUATION OF MOBILITY PROBLEMS FOR USE IN PRIMARY CARE
The comprehensive approach to the clinical assessment of treatable causes of mobility limitation is currently a specialized referral function that is resource intensive. Evaluation starts with clinical assessment of the severity, course, and consequences of mobility limitation and determination of the potential to improve mobility. Mobility performance is then assessed in more detail, including biomechanical aspects of functional limitations during movement. Potential contributing factors are identified based on physiologic impairments, and evidence is sought for psychosocial and environmental influences.
Clinical Assessment of Mobility Performance
In ambulatory patients, simple assessment of gait speed is a useful place to start. The gait speed can be used to estimate function as described above. Gait can be assessed in more detail from a biomechanical perspective (Table 50-7). Gait can be examined for general characteristics like path deviations, irregular or variable stepping, or a widened base of support and for altered motion of the component parts: trunk, arms, hip, knee, ankle, and foot.
Mobility tasks can be examined for performance difficulty or altered
movement patterns as task demands increase. Sometimes, the finding of an abnormal body-segment movement or gait characteristic suggests a specific impairment or disease. More often, the abnormality is nonspecific but is amenable to direct intervention in rehabilitation. Clinical gait assessments that capture many of these biomechanical elements (eg, variable stepping, path deviation, hip range of motion, arm swing) are the modified Gait Abnormality Rating Scale and the Tinetti Performance Oriented Mobility Assessment.
TABLE 50-7 ■ EXAMPLES OF A BIOMECHANICAL ASSESSMENT OF COMMON GAIT ABNORMALITIES AND POSSIBLE CAUSES
Mobility assessment scales usually have a functional rather than a biomedical perspective and are not designed to detect specific impairments. These scales can be used to identify areas for task practice or adaptation in rehabilitation and to assess the effects of treatment. When selecting an
assessment tool, the patient’s level of mobility should be considered. For nonambulatory patients, assessments should focus on bed mobility and transfers. The Hierarchical Assessment of Balance and Mobility assesses mobility, transfers, and balance and distinguishes multiple levels of function within bed and chair mobility. For ambulatory patients, assessments should focus on walking, including unchallenged (straight-path) walking and some challenged walking. For patients with vigorous mobility, assessments should focus on more challenging walking tasks such as uneven surfaces, obstacles, curved paths, dual tasking, and walking for longer distances. Some clinical measures focus on one mobility group (ie, nonambulatory), but many clinical assessments of mobility are based on a hierarchy of task difficulty and span the various mobility groups. One such measure, the Berg Balance Scale, assesses 14 tasks of progressive difficulty, from sitting balance to one-leg standing and rising onto a step. The Physical Disability Index has eight mobility tasks, including six items for nonambulatory persons. The Activity Measure for Post-Acute Care (AM-PAC) was developed specifically for use across post-acute care settings (from inpatient to outpatient settings) and can also capture a range of mobility. The AM-PAC is based on patient responses and clinician observation and captures three domains: basic mobility, daily activities, and applied cognitive. The Dynamic Gait Index, which includes eight challenging gait items such as changing gait speed, walking with head turns, and stepping over an obstacle, may be appropriate for patients with higher level (ie, vigorous) mobility.
Differential Diagnosis Based on a Physiologic Perspective
A clinical schema for the comprehensive evaluation of mobility that is derived from Ferrucci, Tinetti, and the authors’ own work is proposed in Table 50-8. Many impairments are detectable through the usual geriatric clinical history and physical examination. Some sensory systems are amenable to clinical evaluation. Some impairments are hard to detect in the clinic and require further testing. Vestibular testing may be helpful when there is unsteadiness that is not well explained by other impairments or when specific vestibular symptoms are present. Electrodiagnostic testing of nerve conduction velocity or abnormal muscle activity may be indicated when neurologic findings are suspicious. Exertional chest pain or dyspnea requires appropriate cardiac and pulmonary testing and a screen for anemia. Leg pain
on exertion suggests testing for peripheral vascular disease or lumbar stenosis.
TABLE 50-8 ■ ASSESSMENT AND TREATMENT OF ORGAN OR SYSTEM IMPAIRMENTS THAT CAUSE MOBILITY PROBLEMS
[Table 50-8 is not legible in this source. Recoverable fragments indicate three columns, Impairment, Assessment, and Treatment, with rows spanning sensory (vision, vestibular, peripheral nerve), circulatory, cardiopulmonary, hematologic (anemia), muscular (sarcopenia), musculoskeletal (bone and joint pain, flexibility), and neurologic (Parkinson disease, neuropathy, lumbar stenosis) impairments.]
Psychosocial and Environmental Assessment
Mobility limitations are influenced by psychological, social, and environmental factors (Table 50-9). Depression can have a powerful effect on the desire to be mobile. Fear of falling, lack of confidence, lower attention, and low self-efficacy can also adversely influence a person’s mobility. Screens for these conditions can include a single question or multiple questions (see Table 50-3) and should be part of a comprehensive mobility assessment. Apathy and lack of motivation are a common concern in geriatric rehabilitation. Formal and informal social support resources can be critical for the person with mobility limitations. Cultural and financial factors can influence attitudes toward disability and resources for addressing the problem. The safety and accessibility of the living environment can be a barrier or facilitator for persons with mobility problems. Self-report measures, such as the life-space mobility questionnaire, quantify how often and with what level of difficulty older adults engage with progressively wider environments (eg, leaving bedroom, going outside, traveling beyond neighborhood, etc.). A home visit for assessment can offer many opportunities for creative problem solving.
TABLE 50-9 ■ PSYCHOSOCIAL AND ENVIRONMENTAL ASSESSMENT AND MANAGEMENT OF MOBILITY
In recent years, mobile health technologies, such as wearable devices and smartphones, have become widely used to monitor physical activity in the person’s own environment. These devices could also be used to gain insights into spatiotemporal parameters of gait.
MANAGEMENT
Intervening Directly on Mobility
Interventions directed at functional limitations are often rehabilitative in nature and involve exercise, adaptive equipment, and environmental modifications. Mobility limitations can be addressed through mobility task practice and exercise to improve specific impairments in strength, balance, endurance, and/or flexibility. Deconditioning is almost always present as a direct consequence of reduced mobility and inactivity, and deconditioning has been found to be treatable in many older adults who are sick or frail.
General conditioning programs of exercise are frequently indicated. Task-specific exercises and assistive and orthotic devices can improve stability and reduce weight-bearing pain. The evidence for the effectiveness of rehabilitation and exercise interventions is growing and has been examined in older adults with varying levels of mobility limitation. In general, increasing the amount of time spent walking, even at a moderate pace and in the absence of direct cardiovascular benefits, can improve mobility and reduce disability. Recent studies suggest that a focus on the neural control of walking, through the development of progressively more complex motor skills, may be especially effective in improving walking speed and the energy efficiency of walking. Chapter 55 on rehabilitation and Chapter 54 on exercise present these interventions in more detail.
Treating Impairments
Some impairments can be linked to diseases and pathologic processes that are amenable to medical treatment, and some impairments can be improved directly regardless of pathologic cause (see Table 50-8). Peripheral sensory disorders are often not correctable, but compensation can be achieved with improved lighting to enhance visual information and with increased haptic feedback (for example, through use of a cane). Haptic perception is demonstrated by the remarkable decrease in sway with eyes closed, seen in persons with peripheral sensory or vestibular disorders, when they are allowed even minimal, nonsupporting contact with a stable surface such as a table, wall, or assistive device. Shoe inserts that increase sensory feedback are in development.
Progress in rehabilitation technology has demonstrated some beneficial effects on recovery of mobility after stroke and other injuries. For example, individuals with unilateral cerebral lesions after stroke may achieve more symmetric stepping following split-belt treadmill training, in which the two legs move at different speeds to encourage even stepping after training. Other technologies include body weight–supported treadmill training, although evidence indicates that robust conventional physical therapy leads to similar benefits. These technological advances could substantially change the approach to physical therapy for older adults.
The slowed gait of Parkinson disease can be responsive to medication, although the balance disorder is not. Patients with Parkinson disease may move faster when medications are initiated and thus increase their risk of fall injury unless appropriate rehabilitative measures are coordinated. Likewise, current evidence indicates that individuals with benign positional vertigo (BPV) benefit more from a combination of the canalith repositioning maneuver and balance rehabilitation than from the repositioning maneuver alone. From the standpoint of improving mobility limitations, most pharmacologic interventions work best in concert with rehabilitation.
Attention to Factors That Modify Behavior and Environment
The modifiable psychosocial factors that influence physical activity may offer opportunities to intervene (see Table 50-9). Depression can be managed medically or with psychotherapy. Social support and encouragement can be promoted through group activities. Exercise (ie, balance and strength training, Tai chi, dance) may reduce fear of falling without increasing the risk or frequency of falls. The beneficial effects of physical activity on mobility can also be indirect, for example, by promoting attention, alertness, and mood, and by increasing one’s motivation to move better and more often. Physical environmental adaptations in the home include ramps and railings, bathroom modifications, proper lighting, and strategic placement of stable furniture. Further modifications are often indicated in institutional settings.
Care for the Immobile Person
Interventions to reduce the consequences of immobility include determining the level of care need and living setting, training others to properly position and move the patient, implementing a mobilization plan, using pressure-reducing devices to prevent pressure injuries, and, sometimes, using equipment to aid in transfers (see Chapter 46). Persons who are responsible for carrying out transfers of immobile patients, including health aides and family caregivers, need training in proper techniques that reduce injury to both the patient and the assistant. Appropriate assistive devices, such as wheelchairs or powerchairs, should be considered to increase environmental accessibility.
Absolute bed rest is almost never indicated and should be discouraged in all settings. Mobility-related activities such as scooting in bed, sitting up, and standing can promote improved physiologic function when walking is not feasible. Mobilization, including walking, has been shown to be feasible even in the intensive care setting and is associated with reduced intensive care unit delirium. An exception to routine mobilization might be made for humanitarian reasons when an individual is actively dying and mobilization would cause suffering without benefit. Mobilization, rather than bed rest, during hospitalization for acute illness has been one of the most consistently efficacious interventions in geriatric care units.
SUMMARY
Mobility disorders are widespread in older adults. Mobility limitations constrain many functions required for independent living and are powerful indicators of future problems. Mobility can be classified using simple screening. Evaluation starts with a triage function or simple measures like gait speed. Many common contributors to mobility limitations can be managed in the primary care setting. A comprehensive mobility evaluation is resource intensive and requires a multidisciplinary team. Evaluation and management include a biomechanical approach to function, a biomedical approach to the physiologic components of mobility, and a biopsychosocial and environmental approach to modifying factors.
FURTHER READING
Bevilacqua R, Maranesi E, Riccardi GR, et al. Non-immersive virtual reality for rehabilitation of the older people: a systematic review into efficacy and effectiveness. J Clin Med. 2019;8(11):1882.
Brach JS, VanSwearingen JM. Interventions to improve walking in older adults. Curr Transl Geriatr Exp Gerontol Rep. 2013;2(4).
Cohen JA, Verghese J. Gait and dementia. Handb Clin Neurol.
2019;167:419–427.
Ferrucci L, Bandinelli S, Benvenuti E, et al. Subsystems contributing to the decline in ability to walk: bridging the gap between epidemiology and geriatric practice in the InCHIANTI study. J Am Geriatr Soc.
2000;48:1618–1625.
Freedman VA, Spillman BC, Andreski PM, et al. Trends in late-life activity limitations in the United States: an update from five national surveys.
Demography. 2013;50:661–671.
Fried LP, Bandeen-Roche K, Chaves PH, Johnson BA. Preclinical mobility disability predicts incident mobility disability in older women. J Gerontol Med Sci. 2000;55A:M43–52.
Guralnik JM, Ferrucci L, Simonsick EM, et al. Lower-extremity function in persons over the age of 70 years as a predictor of subsequent disability. N Engl J Med. 1995;332:556–561.
Jorstad EC, Hauer K, Becker C, et al. Measuring the psychological outcomes of falling: a systematic review. J Am Geriatr Soc. 2005;53:501–510.
Kendrick D, Kumar A, Carpenter H, et al. Exercise for reducing fear of falling in older people living in the community. Cochrane Database Syst Rev. 2014;11:CD009848.
Li KZH, Bherer L, Mirelman A, et al. Cognitive involvement in balance, gait and dual-tasking in aging: a focused review from a neuroscience of aging perspective. Front Neurol. 2018;9:913.
Moskowitz S, Russ DW, Clark LA, et al. Is impaired dopaminergic function associated with mobility capacity in older adults? Geroscience. 2021;43(3):1383–1404.
Pahor M, Guralnik JM, Ambrosius WT, et al. Effect of structured physical activity on prevention of major mobility disability in older adults: the LIFE study randomized clinical trial. JAMA. 2014;311(23):2387–2396.
Peel NM, Kuys SS, Klein K. Gait speed as a measure in geriatric assessment in clinical settings: a systematic review. J Gerontol A Biol Sci Med Sci. 2013;68(1):39–46.
Perera S, Mody SH, Woodman RC, et al. Meaningful change and responsiveness in common physical performance measures in older adults. J Am Geriatr Soc. 2006; 54:743–749.
Rosano C, Rosso AL, Studenski SA. Aging, brain, and mobility: progresses and opportunities. J Gerontol A Biol Sci Med Sci. 2014;69(11):1373–1374.
Schaap LE, Koster A, Visser M. Adiposity, muscle mass, and muscle strength in relation to functional decline in older persons. Epidemiol Rev.
2013;35:51–65.
Studenski S, Perera S, Patel K, et al. Gait speed and survival in older adults.
JAMA. 2011;305(1):50–58.
VanSwearingen JM, Studenski SA. Aging, motor skill, and the energy cost of walking: implications for the prevention and treatment of mobility decline in older persons. J Gerontol A Biol Sci Med Sci.
2014;69(11):1429–1436.
Warmerdam E, Hausdorff JM, Atrsaei A, et al. Long-term unsupervised mobility assessment in movement disorders. Lancet Neurol.
2020;19(5):462–470.
Wennberg AM, Savica R, Mielke MM. Association between various brain pathologies and gait disturbance. Dement Geriatr Cogn Disord.
2017;43(3–4):128–143.
Chapter 51
Osteoporosis
Gustavo Duque, Mizhgan Fatima, Jesse Zanker, Bruce R. Troen
DEFINITION OF OSTEOPOROSIS
The term osteoporosis was first introduced in the nineteenth century based on histologic diagnosis (“porous bone”). Osteoporosis is a “disease characterized by low bone mass and microarchitectural deterioration of bone tissue leading to enhanced bone fragility and a consequent increase in fracture incidence.” Osteoporosis may also be defined either by the presence of a fragility fracture (a fracture resulting from a fall from standing height or less) or by bone mineral density (BMD) measurement. In defining BMD criteria for osteoporosis, the World Health Organization (WHO) used as the standard the BMD of young adult women at the age of peak bone mass. For each standard deviation below peak bone mass (ie, each 1-unit decrease in T-score), a woman’s fracture risk approximately doubles. As seen in Table 51-1, a T-score of −2.5 or lower defines osteoporosis; osteopenia (low bone mass) and normal bone mass are also defined.
TABLE 51-1 ■ WORLD HEALTH ORGANIZATION CRITERIA FOR OSTEOPOROSIS
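The relationships just described can be written compactly. As an illustrative sketch following the WHO definition and the approximate doubling of fracture risk per standard deviation:

```latex
T = \frac{\mathrm{BMD}_{\text{patient}} - \mathrm{BMD}_{\text{young adult mean}}}
         {\mathrm{SD}_{\text{young adult}}},
\qquad
\text{relative fracture risk} \approx 2^{-T} \quad (T < 0).
```

For example, at the osteoporosis threshold of $T = -2.5$, the relative fracture risk is roughly $2^{2.5} \approx 5.7$ times that of a young adult at peak bone mass.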
A BMD measurement may confirm the diagnosis of osteoporosis and indicates that interventions are needed before fracture occurs in older adults. In addition, individuals with osteopenia may still be at risk of fracture. They should therefore be followed carefully for further bone loss, while nonpharmacologic interventions that maintain bone health are also promoted.
Although the original standards for definitions of osteoporosis were determined in White women, the standards for men and Hispanic women are similar to those for White and African-American women. However, defining osteoporosis solely by T-score does not effectively capture all patients at risk of fracture. More than 50% of all hip fractures occur in those with T-scores better than −2.5. Failure to evaluate and treat such patients adds to the individual and societal cost and consequences of osteoporosis.
Therefore, we are still faced with the challenge of improving the identification of the individual patient at risk of fracture and subsequently optimizing both prevention and treatment for older adults.
Primary or idiopathic osteoporosis has historically been classified as postmenopausal or senile osteoporosis. Postmenopausal osteoporosis occurs in women between 51 and 75 years of age. It is related to the estrogen deficiency of the menopausal transition, which is associated with very high levels of bone resorption. In contrast, senile osteoporosis typically occurs in persons older than 60 years. It affects both men and women and has a different pathophysiology, involving reduced bone turnover due to a reduction in the number of bone-forming cells (osteoblasts). Increasing evidence points to a progressive age-related alteration in stem cell physiology that favors adipogenesis and thereby reduces osteoblastogenesis and bone formation. Nevertheless, estrogen probably plays a role in the pathophysiology of senile osteoporosis as well. Secondary osteoporosis is the result of underlying conditions or medications that adversely affect bone. This chapter focuses on the typical characteristics of senile osteoporosis, from its pathophysiology to therapeutic approaches.
Learning Objectives
To understand the features of osteoporosis in older persons
To identify fracture risk in older persons
To learn fracture prevention strategies in older persons
EPIDEMIOLOGY
Because osteoporosis prevalence increases with age, worldwide population aging and changing lifestyle habits have caused the prevalence of osteoporosis to rise significantly, and it will continue to rise in the future. In 2018, the National Osteoporosis Foundation (NOF) announced that 54 million Americans, half of all adults age 50 and older, are at risk of breaking a bone and should be concerned about bone health; approximately 10 million adults in the United States have osteoporosis, with an additional 43 million having low bone mass. In addition, women have more than 250,000 hip fractures and 500,000 spine fractures per year. Men account for an additional 250,000 fractures per year, of which 75,000 are hip fractures.
On reaching the age of 90, one-third of women and one-sixth of men will have suffered a hip fracture. Both women and men have a similar lifetime vertebral fracture risk of 12%. The consequences of osteoporotic fracture include diminished quality of life (QoL), decreased functional independence, and increased morbidity and mortality. Pain, kyphosis, height loss, and other changes in body habitus resulting from vertebral compression fractures diminish QoL in women and men. These changes lead to declines in functional status, such as the inability to bathe, dress, or ambulate independently, and to decreased pulmonary and gastrointestinal function.
Approximately 50% of women do not fully recover prior function after hip
Key Clinical Points
Both older men and women are at risk of osteoporotic fractures.
Fracture risk assessment, including clinical factors, should be performed in every person older than 65.
Calcium and vitamin D should be an essential component of any osteoporosis treatment.
Antiresorptives (bisphosphonates and denosumab) and anabolics (teriparatide and romosozumab) are effective and safe treatments for osteoporosis in older persons.
fracture; older adults have 20% to 25% mortality in the year following hip fracture. Indeed, men are at higher risk of dying after a hip fracture than women. Osteoporosis-related fractures impose substantial costs on patients, their families, and the health care system. The estimated annual cost of osteoporotic fractures in the United States is more than $22 billion, and by 2025 the NOF predicts that osteoporosis will be responsible for 3 million fractures costing $25 billion annually, more than is spent treating cardiovascular disease. Therefore, prevention, early diagnosis, and treatment of osteoporosis are vital to improving the health of older adults.
PATHOPHYSIOLOGY
New advances in understanding bone physiology have elucidated an active interaction among bone and bone marrow cells, growth factors, and hormones responsible for maintaining calcium levels, skeletal structure, and resistance to trauma. Bone is not simply a mineralized structure but a complex system of cell–cell, cell–matrix, and cell–hormone interactions influenced by genetic background, lifestyle, and diet.
Bone is composed of inorganic (calcium phosphate crystals) and organic compounds (90% collagen and 10% noncollagenous proteins).
Noncollagenous proteins include albumin, osteopontin, osteocalcin, α2-HS-glycoprotein, and growth factors, constituting the so-called bone matrix. The bone matrix is produced by osteoblasts and is the environment in which bone and external factors interact in a well-coordinated manner. There are two types of bone: cortical and trabecular. Cortical bone predominates in the long bones of the extremities and makes up 80% of skeletal mass, while trabecular bone predominates in the vertebrae and pelvis. While both types of bone have an active remodeling process, trabecular bone is metabolically more active than cortical bone and more acutely responsive to alterations in sex steroid hormone status. The bone marrow is also a complex environment in which bone cells interact with hematopoietic and marrow adipose tissues (Figure 51-1), playing an essential role in regulating bone turnover.
FIGURE 51-1. Components of bone structure. Computed tomography (CT) images of osteoporotic bone (vertebrae and proximal femur) from a 70-year-old woman, analyzed using specialized image analysis software (Tissue Compass™) that depicts bone toward marrow, left to right. Cortical and trabecular bone are illustrated in blue. Note that the marrow is occupied mainly by fat (yellow) at the expense of hematopoietic (red) marrow.
During childhood and adolescence, skeletal growth occurs at growth plates, areas in which cartilage proliferates and gradually undergoes calcification, resulting in new bone formation. However, bone remodeling is a lifelong process that maintains bone to harbor bone marrow, support the body, protect vital organs, and provide a source of minerals. Remodeling replaces older, frailer bone with newer, more resilient bone in an organized manner. The end product of remodeling is the maintenance of skeletal homeostasis and the preservation of anatomical integrity. With aging or with menopausal transition, the once-coordinated mechanism of bone remodeling with balanced formation and resorption becomes uncoupled, leading to bone loss and increased risk of fracture.
BONE CELLS
The cells involved in bone turnover are osteoclasts, osteoblasts, and osteocytes (Figure 51-2). Osteoclasts are macrophage-like cells that secrete proteolytic enzymes and hydrogen ions required to remove the deposited
bone matrix. The remodeling cycle begins when osteoclast precursors interact with other marrow cells and are activated, becoming multinucleated osteoclasts, which initiates resorption. Bone resorption occurs within the resorption lacuna, a tightly sealed zone beneath the ruffled border of the osteoclast where it has attached to the bone surface. Resorption depends on acidification of this extracellular compartment leading to demineralization. Subsequently, the organic matrix is degraded by cysteine proteases, chief of which is cathepsin K. Osteoclasts consequently create a functional extracellular lysosome, containing both an acidic environment and specific lysosomal enzymes.
FIGURE 51-2. The cellular components of bone turnover. After the expression of specific transcription factors, mesenchymal precursors differentiate into osteoblasts. In contrast, osteoclasts differentiate from mononuclear precursors and act as bone-resorbing cells in the bone multicellular unit. After the completion of bone resorption, osteoclasts undergo apoptosis and are replaced by active osteoblasts responsible for forming new bone. Osteoblasts finally end as lining cells, as osteocytes embedded into the osteoid, or undergo apoptosis. Osteocytes are neuron-like cells representing end-stage osteoblasts that have become embedded in the bone matrix (osteoid). (Reproduced with permission from Al Saedi A, Stupka N, Duque G. Pathogenesis of Osteoporosis. Handb Exp Pharmacol. 2020;262:353–367.)
In cortical bone, the resorption period lasts approximately 30 days; the final result is a resorption tunnel that osteoblasts later fill in a haversian manner, laying down plates of bone in concentric rings around a central channel. In cross-section, these haversian systems resemble a “cut onion,” which gives cortical bone its typical morphology. In trabecular bone, the erosion period lasts approximately 43 days, resulting in a trench between the trabeculae. The life span of osteoclasts is around 2 weeks; once these cells complete their role as bone-resorbing cells, they undergo apoptosis, or programmed cell death.
Osteoblasts are fibroblast-like cells derived from pluripotent mesenchymal cells that localize on periosteal surfaces (Figure 51-2). Such pluripotent stromal cells can be induced to differentiate along the osteoblastic, adipocytic, fibroblastic, or chondrocytic lineages as required. When bone integrity must be conserved, mesenchymal stromal cells are committed toward the osteoblastic lineage. Many factors are involved in the process of osteoblastogenesis (Figure 51-3), including the bone morphogenic protein family; bone morphogenic proteins 2, 4, and 7 are potent inducers of osteoblast differentiation. The transcription factor Runx2/Cbfa1 plays a crucial role in osteoblast differentiation; mice lacking Runx2/Cbfa1 do not form bone. A mature osteoblast is a cuboidal cell with a large nucleus and an enlarged Golgi apparatus highly enriched in alkaline phosphatase. It produces type I collagen and specialized bone-matrix proteins, which together form osteoid, the unmineralized matrix that serves as the template for subsequent bone formation and mineralization.
Osteoblasts produce alkaline phosphatase, the specific function of which has
yet to be determined. Nevertheless, it is used as a marker of osteoblast differentiation and activity and indirectly as a marker of subsequent osteoclast resorption. Mice lacking functional alkaline phosphatase suffer from hypophosphatasia characterized by impaired mineralization of cartilage and bone matrix. After osteoblasts complete their bone-forming function, they face one of three fates: (1) they become embedded in the newly formed matrix, becoming osteocytes; (2) they remain on the surface of the newly formed bone and become lining cells; or (3) they undergo apoptosis.
Hormonal changes, the presence or absence of growth factors, inflammatory conditions, and the aging process in bone determine the ultimate fate of the osteoblast.
FIGURE 51-3. Factors that regulate bone cell differentiation and bone turnover. The Wnt signaling pathway is the most critical stimulator of osteoblastogenesis. Wnts activate β catenin, which translocates to the nucleus and stimulates the expression of osteogenic transcription factors such as RUNX2. Intermittent exposure to parathyroid hormone also has an osteogenic effect. Osteocytes regulate osteoblastogenesis via two major inhibitory factors, sclerostin and DKK1. Osteoclastogenesis is stimulated by the receptor activator of nuclear factor kappa-Β ligand (RANKL), which is secreted by stromal cells and mature osteoblasts. Inflammatory factors such as TNFα and interleukins induce RANKL expression. In addition, adipocytes secrete adipokines and fatty acids that cause osteoblast and osteocyte apoptosis and induce osteoclast differentiation and activity. (Reproduced with permission from Feehan J, Al Saedi A, Duque G. Targeting fundamental aging mechanisms to treat osteoporosis. Expert Opin Ther Targets. 2019;23[12]:1031–1039.)
In contrast, osteoclasts belong to the macrophage lineage and express multiple very potent degradative enzymes. Osteoclast differentiation, formation, and, to a lesser degree, activation depend upon the proximity and products of the osteoblast (Figure 51-3). Without exception, the fate of the osteoclasts is to die by apoptosis.
Osteocytes constitute the third group of bone cells that are involved in bone metabolism. These cells are the most abundant cell type in bone and are the focus of intense research. Osteocytes are postmitotic terminally differentiated osteoblasts that are entrapped within the new bone matrix.
Once considered inert, these cells are now recognized as key regulators of skeletal metabolism, mineral homeostasis, and hematopoiesis. Osteocytes are the critical responders to mechanical forces and orchestrators of bone remodeling and mineral homeostasis (Figure 51-3). Although osteocytes are
entombed within their hosting lacunae, they are not isolated; they maintain close communication with other cells and microenvironments through a complex network of channels (canaliculi) in which osteocyte projections (dendritic processes) are in close contact with blood vessels. Functions attributed to osteocytes include the synthesis of matrix molecules such as osteocalcin and an essential role in direct communication with surface osteoblasts through gap-junction proteins known as connexins. Osteocytes also secrete two molecules, sclerostin and DKK1, which are potent inhibitors of osteoblast differentiation and function and play an important role in the activation and regulation of bone metabolism in response to physiologic and mechanical stimuli. These modulate the response of bone during functional adaptation of the skeleton to mechanical forces and the need for repair of microdamage. Any mechanical force applied to the bone (eg, exercise) has an inhibitory effect on sclerostin and DKK1, thus facilitating osteoblast differentiation and function. Osteocytes are very long-lived cells, with a half-life of 25 years, after which most undergo apoptosis.
BONE TURNOVER
Bone homeostasis depends on the intimate coupling of bone formation and bone resorption. After osteoclasts resorb bone, preosteoblasts differentiate into osteoblasts, migrate to the area of excavated bone, and begin to deposit osteoid, which is eventually mineralized into new bone. Osteoclasts and osteoblasts belong to a temporary structure known as a basic multicellular unit (see Figure 51-2). The coordinated process of bone resorption and formation by the basic multicellular unit lasts 6 to 9 months and results in newly mineralized bone. Osteoblasts are not only active as bone-forming cells; they also play an important role in the regulation of osteoclast activity. The interaction between osteoblasts and osteoclasts requires a complex system of factors facilitated by integrins and cadherins (see Figure 51-4). Briefly, the primary osteoclast/osteoblast interaction depends on the expression, by osteoclast precursors and mature osteoclasts, of a membrane receptor known as receptor activator of nuclear factor-κB (RANK), which belongs to the family of tumor necrosis factor (TNF) receptors. Osteoclast differentiation, maturation, and survival depend on RANK activation by its cognate ligand (RANK ligand, RANKL), which is produced by osteocytes, osteoblasts, and osteoblast precursors after
exposure to different stimuli such as hormones and cytokines (Figure 51-4). Multiple other factors also act to either enhance or suppress osteoclast formation and activation and subsequent bone resorption (see Table 51-2).
TABLE 51-2 ■ LOCAL FACTORS REGULATING BONE CELL INTERACTION AND ACTIVITY
FIGURE 51-4. Osteoblast–osteoclast coupling and the regulation of RANK ligand expression. Osteoblast production of M-CSF and RANKL play critical roles in the differentiation and activation of osteoclasts. M-CSF acts to maintain monocytic stem cell survival, and subsequently, RANKL acts to commit the cell toward osteoclast differentiation, fusion, polarization, and activation. EphB4 and ephrinB2 interact both to limit osteoclast activity and stimulate osteoblast differentiation. TGF-β acts only upon release from the extracellular matrix after osteoclastic resorption, which is mainly mediated by the excretion of CTSK. BMP-2, bone morphogenetic protein-2; CTSK, cathepsin K; M-CSF, macrophage colony-stimulating factor; PDGF, platelet-derived growth factor; RANKL, RANK ligand; TGF-β, transforming growth factor-β.
RANKL is mainly a cytoplasmic membrane-bound molecule; to a lesser extent, it is secreted. Mature osteoblasts and osteocytes also produce a decoy receptor for RANKL called osteoprotegerin (OPG). OPG competitively binds to RANKL and prevents the interaction between RANK and RANKL, thus decreasing osteoclastogenesis and osteoclastic bone resorption and increasing osteoclast apoptosis. More recently, a group of molecules known as ephrins has been identified as key players in regulating osteoblast/osteoclast interaction. This cellular communication is bidirectional and involves a trans-membrane ligand known as ephrinB2, expressed by osteoclasts, and its receptor EphB4 expressed by osteoblasts
(see Figure 51-4). This signaling seems to limit osteoclast activity while enhancing osteoblast differentiation. Consequently, osteoblastogenesis and osteoclastogenesis, along with corresponding bone formation and resorption, are tightly and ineluctably coupled. The differentiation and activation of both osteoblasts and osteoclasts depend critically on each other; however, recent evidence indicates that osteocytes also play an essential role in bone turnover by regulating osteoblast function and survival as well as osteoclast function; all modulated by hormones, growth factors, and mechanical forces.
GENETICS
Genetics plays a role in the determination of peak bone mass. Racial differences in the incidence of osteoporosis have been reported, including a lower relative risk of fractures and higher peak bone mass in African American women compared with White women. No single gene, gene product, or polymorphism has yet been credibly identified to account for the variance seen in BMD in specific geographic areas. Several environmental factors, such as diet, topography, and yearly sunlight exposure, almost certainly interact with a genetic predisposition to explain the variance seen in periosteal expansion before puberty and in trabecular number and thickness and periosteal-endosteal remodeling during aging. Candidate genes for determining peak bone mass include the vitamin D receptor, vitamin D binding protein, peroxisome proliferator-activated receptor gamma, the Jagged 1 gene, and low-density lipoprotein receptor-related protein 5. Polymorphisms in all of these have been associated with different levels of peak bone mass and predisposition to fractures in adulthood. However, multiple studies have shown that BMD and fracture predisposition are complex traits controlled by multiple genetic loci. More generally, there does appear to be a familial predisposition to osteoporotic fracture: fracture risk increases if an immediate family member (most typically a mother or sister) has experienced an osteoporotic fracture.
MECHANICAL FACTORS
Approximately 95% of peak adult bone mass is gained by the end of puberty. The level of peak bone mass attained and the subsequent rate of bone loss are the primary factors that determine an individual’s bone mass in early and late adulthood. Initial bone formation does not require a mechanical stimulus, but
further appositional and endochondral growth is dependent on the mechanical forces generated by the muscles. The magnitude of this loading is directly related to body mass and physical activity. There is some evidence that after mechanical load, microfractures may occur in bone, with subsequent activation of interleukins (ILs) and growth factors, thereby regulating bone turnover and formation. In addition, osteocytes play an important role in the response to mechanical stress by stimulating bone turnover and facilitating bone formation through the release of RANKL and the inhibition of sclerostin and DKK1.
LOCAL FACTORS
Local factors are important in regulating bone turnover and in the interaction between bone matrix and systemic factors and hormones (see Table 51-2 and Figures 51-2 to 51-4). The skeleton responds to mechanical forces by several regulatory mechanisms, including the release of cytokines that regulate cell differentiation, such as macrophage colony-stimulating factor (M-CSF) and granulocyte colony-stimulating factor. Mediators and regulators of cell–cell interaction include insulin-like growth factor (IGF)-1 and IGF-2, parathyroid hormone (PTH)-related peptide, IL-1, IL-6, and TNF-α. In addition, TNF-α, IL-6, IL-1, and prostaglandins largely mediate the response to sex-steroid hormones. Although these factors are central to osteoblast–osteoclast regulation and to the pathogenesis of osteoporosis, their usually stable systemic levels suggest that alterations in local secretion and concentration are critical to bone physiology. These local factors largely determine the activation or inhibition of bone cells, cell recruitment, cell differentiation, and life span.
SYSTEMIC HORMONES
A number of systemic hormones affect bone metabolism, including vitamin D, PTH, calcitonin, and sex-steroid hormones (estrogens and androgens) (see Table 51-2). The major effect of vitamin D is to maintain calcium homeostasis by increasing the efficiency of the small intestine in absorbing dietary calcium. Vitamin D also plays a role in bone resorption, by inducing RANKL expression by osteoblasts and thereby promoting osteoclast differentiation and activation, and in bone formation, by stimulating osteoblastogenesis and inhibiting apoptosis of mature osteoblasts.
Hypovitaminosis D, widespread in older adults, is associated with lower BMD, frequent falls, and more osteoporotic fractures.
The parathyroid glands secrete PTH through a calcium sensor mechanism. When calcium levels decrease, PTH is released and exerts its function on two primary target tissues: kidney and bone. In the kidney, PTH acts on the proximal tubule to reduce phosphate (PO4) reabsorption and to increase the
activity of 1-α-hydroxylase, the enzyme that converts 25(OH)-vitamin D to 1,25(OH)2-vitamin D3, the active form of vitamin D. In bone, PTH increases osteoclast-induced bone resorption by inducing RANKL expression and
subsequent signaling via RANK. Hypovitaminosis D is often, but not always,
accompanied by elevated PTH (secondary hyperparathyroidism). Acute and cyclical exposure to PTH has an antiapoptotic as well as an anabolic effect on osteoblasts; this is the basis for using PTH to treat severe osteoporosis (see further discussion below). Calcitonin is a hormone secreted by thyroidal C cells in mammals. Its main biologic effect is the inhibition of osteoclastic bone resorption. In vitro and in vivo studies in animals demonstrate that calcitonin causes the osteoclast to shrink and retract from the bone surface, decreasing its bone-resorbing activity while enhancing the activity of bone-forming osteoblasts.
Sex-steroid hormones play a variety of roles in bone turnover. Although some aspects of estrogen's effects remain unclear, estrogen increases the level of OPG, inhibiting osteoclastogenesis. Estrogen also induces osteoclast apoptosis and regulates the actions of IL-1, IL-1 receptor antagonist (IL-1Ra), IL-6, and TNF-α, and of their binding proteins and receptors. Declining estrogen levels lead to increased expression of IL-1, IL-6, and TNF-α, all of which enhance bone resorption. In response to diminished estrogen, osteoblasts produce more RANKL and less OPG, which promotes RANKL–RANK interaction and signaling, further stimulating osteoclast differentiation and activation. Since estrogen increases osteoblast differentiation and decreases osteoblast apoptosis, bone formation declines at the time of menopause. Overall, there is a high-turnover state with predominant bone resorption, which results in bone loss and susceptibility to fractures.
Androgens play an important role in the formation of adolescent bone by regulating cytokines in the bone matrix. The effect of progesterone on bone seems to be indirect and limited through its regulation of calcitonin secretion and thus bone resorption.
Women are at higher risk of osteoporosis because they have lower peak bone mass than men and experience accelerated bone loss during menopause, as described above. Histomorphometric data on the skeletal changes associated with postmenopausal bone loss show increased bone turnover in both cancellous and cortical bones. Biochemical markers also reflect high bone resorption after menopause. These markers return to normal with estrogen replacement. Trabecular bone is affected earlier in menopause than cortical bone because it is more metabolically active. Thus, rapid bone loss is seen primarily in the spine (3% per year) for approximately 5 years after menopause. Subsequently, there is a slower rate of bone loss that is more generalized (> 0.5% per year at many sites). A consistent finding in untreated postmenopausal women is a reduction in wall width of bone, indicating decreased osteoblast activity. Although this could be related to the loss of the antiapoptotic effect of estrogen on osteoblasts, studies are inconclusive.
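To put the cited rates in perspective, the compound effect of roughly 3% per year of spinal bone loss for about 5 years after menopause, followed by roughly 0.5% per year thereafter, can be sketched as a back-of-the-envelope calculation. The function below is purely illustrative: the rates are the approximate figures cited above, and treating them as multiplicatively compounding annual losses is an assumption made for illustration.

```python
def remaining_bone_fraction(years_post_menopause,
                            rapid_rate=0.03,   # ~3%/yr spinal loss (rate cited above)
                            rapid_years=5,     # rapid phase lasts ~5 years
                            slow_rate=0.005):  # ~0.5%/yr generalized loss thereafter
    """Fraction of baseline spinal bone mass remaining, assuming the cited
    annual loss rates compound multiplicatively (an illustrative assumption)."""
    rapid = min(years_post_menopause, rapid_years)
    slow = max(years_post_menopause - rapid_years, 0)
    return (1 - rapid_rate) ** rapid * (1 - slow_rate) ** slow
```

Under these assumptions, roughly 14% of spinal bone mass is lost by the end of the rapid phase (0.97**5 ≈ 0.86), and roughly 18% by 15 years after menopause.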
AGE-RELATED MECHANISMS OF OSTEOPOROSIS
Age-related bone loss is a complex phenomenon, with many factors involved in its pathogenesis (see Figures 51-4 and 51-5). As individuals age, distinct changes occur in trabecular bone, cortical bone, and bone marrow. The onset and triggers of age-related bone loss are still not fully defined. However, densitometric studies show a slow and progressive decline in BMD of approximately 0.5% per year after the third decade, even though serum levels of estrogens are still within the normal range. With aging, osteoblastogenesis decreases, resulting in lower numbers of osteoblast precursors and increasing bone marrow adiposity (see Figures 51-1 and 51-4). The bone marrow of a young individual is virtually devoid of adipocytes. However, in older adults, adipose deposits may occupy up to 90% of the bone marrow cavity.
FIGURE 51-5. The role of fat in aging bone. Increasing bone marrow fat levels observed in aging bone are associated with the local secretion of lipotoxic factors (fatty acids and adipokines), reducing osteoblast differentiation and inducing apoptosis in osteoblasts and osteocytes. At the same time, they also stimulate osteoclast differentiation and activity.
Pluripotent mesenchymal cells within the bone marrow stroma are, by default, programmed to differentiate into adipocytes, but the presence of specific osteogenic factors in the bone marrow induces osteoblastic differentiation. With aging, those osteogenic factors are decreased, generating a predominant adipocyte differentiation of those precursors. In addition, osteoblast and osteocyte apoptosis increase with aging.
Histomorphometric data demonstrate that 50% to 70% of the osteoblasts present at the remodeling site cannot be accounted for after the enumeration of lining cells and osteocytes. The discrepancy in osteoblast numbers is believed to be a consequence of osteoblast apoptosis. This phenomenon may account for the significant reduction in bone formation associated with aging, which is added to high levels of marrow adipogenesis.
In addition, increasing marrow fat has a direct negative effect on bone metabolism by regulating the function and survival of bone cells. By secreting fatty acids and adipokines, marrow adipocytes inhibit osteoblast differentiation, function, and survival, as well as osteocyte survival. This effect has been described as lipotoxicity, defined as the ectopic accumulation of lipids and lipid products in nonadipose tissues, leading to cellular dysfunction, cell death (lipoapoptosis), and disease. Additionally, adipocyte-secreted factors (primarily fatty acids) affect autophagy, defined as the conserved process
whereby aggregated proteins, intracellular pathogens, and damaged organelles are degraded and recycled. Autophagy appears to play a significant role in skeletal maintenance, as recent reports reveal that suppression of autophagy in osteocytes mimics skeletal aging. Furthermore, marrow adipocytes induce osteoclastic activity by facilitating the release of RANKL into the bone marrow milieu, thus stimulating bone resorption in addition to decreasing bone formation (Figure 51-5).
The early changes associated with age-related bone loss are similar in men and women, as described above. However, women also experience accelerated bone loss of approximately 3% to 5% per year during menopause. In men, the decline in bone mass is gradual until very late in life, when the risk for fractures increases rapidly. Concurrent with osteoblast and adipocyte formation changes, multiple factors enhance osteoclastogenesis and bone resorption (see Figure 51-4). In particular, the interactions between osteoblasts, osteocytes, and osteoclasts, crucial to the dynamic equilibrium that maintains healthy bone, are altered. Consequently, the combination of decreased bone formation and increased bone resorption leads to diminished BMD, more flawed bone structure and quality, and, ultimately, enhanced fragility and fractures.
Muscle-secreted factors could also explain the cellular changes observed in aging bone. There is growing evidence that a complex bone/muscle cross-talk system, exerted via myokines, osteokines, and adipokines, plays a critical regulatory role in bone metabolism (Figure 51-6). This cross-talk is affected by aging and by other factors such as hormones and inactivity, which are associated with reduced levels of osteogenic myokines. In addition, adipokines and fatty acids participate in this cross-talk by affecting bone and muscle structure and function. Overall, this growing evidence of close communication and interaction between muscle and bone has led to the proposal of osteosarcopenia as a new geriatric condition in which osteopenia/osteoporosis and sarcopenia (loss of muscle mass, function, and strength) occur simultaneously in the same individual, increasing the risk of falls and fractures.
FIGURE 51-6. Muscle-bone cross-talk (myokines, osteokines, adipokines) and the pathophysiology of osteosarcopenia. (Reproduced with permission from Kirk B, Zanker J, Duque G. Osteosarcopenia: epidemiology, diagnosis, and treatment-facts and numbers. J Cachexia Sarcopenia Muscle. 2020;11[3]:609–618.)
In addition to cellular changes, there are two major changes in calciotropic hormones that impact aging bone. Vitamin D levels decrease with age and reduce calcium absorption. Changes in the aging skin lessen the amount of 7-dehydrocholesterol, the precursor of cholecalciferol (vitamin D3), and its conversion rate. Furthermore, declining renal function leads to a
reduction in the production and activity of 1-α-hydroxylase, the enzyme responsible for the activation of vitamin D3. Consequently, a negative calcium balance ensues, which activates the calcium sensor receptor in
parathyroid glands. PTH is secreted as a physiologic response, stimulating
osteoclast activity and maintaining normal serum calcium levels at the expense of bone mineralization. This theory of secondary hyperparathyroidism was once considered the definitive explanation for age-related bone loss. However, not all individuals with hypovitaminosis D exhibit secondary hyperparathyroidism; it is therefore just one element of a syndrome that results in
osteoporosis in older adults. However, this mechanism has been recently associated with additional important risk factors for fractures: sarcopenia and falls. Vitamin D and PTH appear to modulate neuromuscular function, particularly in frail older adults. Serum levels of 25(OH)-vitamin D lower than 35 nmol/L increase the risk of falls by 30%, which highly predisposes to fractures. Patients with serum levels between 35 and 80 nmol/L, which were considered normal in the past, are still at risk of falls, suggesting that the therapeutic goal should be to obtain serum levels greater than 80 nmol/L.
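The serum 25(OH)-vitamin D thresholds cited above (35 and 80 nmol/L) are easy to misread when laboratories report levels in ng/mL. The following is a minimal sketch, assuming the standard conversion factor of approximately 2.496 nmol/L per ng/mL for 25(OH)-vitamin D; the category labels are illustrative shorthand for the risk strata described in the text, not standardized terminology.

```python
NMOL_PER_NGML = 2.496  # standard 25(OH)-vitamin D conversion: 1 ng/mL ≈ 2.496 nmol/L

def to_nmol_per_l(ng_per_ml):
    """Convert a serum 25(OH)-vitamin D level from ng/mL to nmol/L."""
    return ng_per_ml * NMOL_PER_NGML

def fall_risk_category(level_nmol_l):
    """Bucket a serum 25(OH)-vitamin D level using the cutoffs cited above:
    <35 nmol/L carries ~30% increased fall risk; 35-80 nmol/L still carries
    residual fall risk; >=80 nmol/L is the suggested therapeutic target."""
    if level_nmol_l < 35:
        return "high fall risk"
    elif level_nmol_l < 80:
        return "residual fall risk"
    return "at therapeutic target"
```

For example, a laboratory value of 10 ng/mL converts to about 25 nmol/L, well below the 35 nmol/L cutoff associated with increased fall risk.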
In summary, age-related bone loss results from changes at the cellular level, including decreased osteoblastogenesis, shortened osteoblast and osteocyte life span, increased adipogenesis and lipotoxicity, simultaneous occurrence of sarcopenia, and hormonal changes, including decreased levels and activity of sex-steroid hormones and vitamin D, and increased levels and activity of PTH.
OSTEOPOROSIS IN MEN
Although the pathophysiology of osteoporosis in men has been a subject of active research in recent years, the relative contributions of hormones and of aging per se remain to be elucidated. It is well established that androgen levels decrease with aging. Testosterone levels decrease by approximately 1.2% per year, and levels of its binding protein (sex hormone-binding globulin) increase with aging, resulting in lower bioavailable testosterone. There is evidence that androgens exert their effect on bone through the action of IGF-1. IGF-1 levels increase during puberty and are closely related to sex-steroid levels.
With aging, lower levels of sex-steroid hormones result in decreased levels of IGF-1, with a reduction in bone formation and bone mass.
Dehydroepiandrosterone, another androgen, declines slightly in the sixth decade without significant changes after that. Contradictory evidence is available about the importance of this decline in dehydroepiandrosterone and its administration in treating male osteoporosis. Thus, osteoporosis in men appears to result from cellular and hormonal changes, including lower levels of testosterone, dehydroepiandrosterone, and IGF-1 with subsequent lower osteoblast activity and higher osteoblast apoptosis. However, further study is necessary to delineate the specific roles of these factors in the decline of BMD and the high rate of fractures in men after the seventh decade of life.
Case reports of low bone mass and increased bone turnover in men with estrogen deficiency—either from an estrogen receptor abnormality or an
absence of aromatase, the enzyme responsible for converting testosterone to estrogen—suggest that estrogen is required for normal bone homeostasis in men. Serum estrogen levels better predict BMD in men than do serum testosterone levels. In older men, in whom both gonadotropin secretion and aromatase conversion are suppressed, estrogen acts as the principal sex steroid regulating bone resorption. Blocking the conversion of testosterone to estrogen using an aromatase inhibitor has been shown to increase bone resorption in a short-term study conducted in healthy older men, further supporting a role for estrogen in bone metabolism. Some of the effects of testosterone on bone may be mediated through aromatization of testosterone to estrogen, a possibility that warrants further study.
PRESENTATION
Osteoporosis is frequently underdiagnosed and undertreated by medical professionals. Osteoporosis is a silent disease, and symptoms may not appear until an incident fracture. Both men and women can have osteoporosis (as characterized by low BMD according to the WHO criteria) prior to a fracture, which is why it is so important to consider clinical risk factors, use risk identification tools (see below), and perform BMD measurement in those at risk of osteoporosis. Osteoporosis may be detected on plain x-rays (usually a chest x-ray), either by the presence of vertebral fractures or by “osteopenia” noted in the x-ray report. As many as a third or more of those with “osteopenia” on an x-ray may have T-scores worse than −2.5, and as many as half will have T-scores in the −2.5 to −1.0 range. Therefore, persons who are diagnosed with “osteopenia” by plain x-ray are candidates for BMD measurement. Osteoporosis may also present as an acute fracture. Most fractures that occur in old age are caused, at least in part, by osteoporosis, and it is crucial to initiate a therapeutic regimen in these patients.
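The T-score ranges mentioned above correspond to the WHO densitometric categories, which can be made explicit in a small sketch (the cutoffs follow the WHO criteria referenced in the text; the function itself is purely illustrative):

```python
def who_densitometric_category(t_score):
    """Classify BMD by T-score using the WHO densitometric criteria:
    T <= -2.5  -> osteoporosis
    -2.5 < T < -1.0 -> osteopenia (low bone mass)
    T >= -1.0  -> normal"""
    if t_score <= -2.5:
        return "osteoporosis"
    elif t_score < -1.0:
        return "osteopenia"
    return "normal"
```

Thus a patient reported as having “osteopenia” on plain x-ray whose subsequent DXA T-score is −2.7 would fall in the osteoporosis range, consistent with the observation that a third or more of such patients have T-scores worse than −2.5.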
Even after a minimal trauma fracture, the diagnosis is often not considered. Three-quarters of postmenopausal women with a distal radius fracture were either undiagnosed or not treated in one study. As many as 50% of women with a hip fracture leave the hospital without treatment. The overall risk of repeat fracture within the first year is 20%. Fractures that are likely related to osteoporosis and thus should trigger therapy with an approved agent include wrist, vertebral, and hip fractures. Frequently, these fractures are classified as fragility fractures because there is often little or no trauma associated with the event. Those with such fractures do not require
BMD testing to establish the need for treatment, although a baseline BMD measurement is usually helpful for monitoring treatment response and adherence.
In older persons, several clinical findings may indicate the presence of vertebral fractures, including height loss and progressive kyphosis. It is recommended that older persons be assessed routinely and that simple measurements, such as occiput-to-wall distance, be performed to identify patients with asymptomatic vertebral fractures.
SECONDARY CAUSES OF OSTEOPOROSIS
The diagnosis of primary osteoporosis is made by BMD measurement before fracture or by incident fracture. Secondary osteoporosis is the consequence of diseases or drugs affecting bone directly (involving changes in bone cells or bone matrix composition) or indirectly (by increasing endogenous or ectopic hormonal production). In evaluating women and men with osteoporosis, it is important to exclude diseases that may present as a fracture or low BMD. Table 51-3 lists the major secondary causes of osteoporosis along with laboratory tests used to exclude each disease. These laboratory tests should be considered in persons who present with an acute compression fracture or with a diagnosis of osteoporosis by BMD measurement, particularly those with Z-scores lower than −2.0. Men are more likely than women to have a secondary cause of osteoporosis. The most commonly reported secondary causes of osteoporosis in men include hypogonadism and malabsorption syndromes. An additional secondary cause of osteoporosis in men is the use of luteinizing hormone-releasing hormone (LHRH) agonists for prostate cancer. LHRH agonists suppress the pituitary gland, decrease testosterone and estrogen to castrate levels, and render men at increased risk of osteoporosis. Several retrospective studies have found increased fracture rates in this population. Many studies demonstrate rates of bone loss up to three- to fourfold higher in men treated with LHRH agonists than the annual rates of bone loss in normally aging men.
TABLE 51-3 ■ RECOMMENDATIONS FOR EVALUATION OF SECONDARY CAUSES OF OSTEOPOROSIS
Medications also may have a detrimental effect on bone. Consideration should be given to dose adjustment, discontinuation of the drugs, or preventive treatment. Medications that adversely affect BMD include glucocorticoids, proton pump inhibitors (PPIs), excess thyroid supplementation, anticonvulsants, methotrexate, cyclosporine, and heparin. In
older adults, glucocorticoids, PPIs, and thyroid hormone are used quite commonly; accordingly, clinicians should consider the effects of these medications on the already increased fracture risk when prescribing these to older adults.
The prevalence of osteoporosis in adults taking glucocorticoids is approximately 30%. Bone loss typically occurs in the first 6 months of therapy, usually associated with doses of ≥7.5 mg/d administered for longer than 3 months. The risk also increases with increasing glucocorticoid dose. Glucocorticoids both suppress bone formation through direct effects on osteoblasts and increase resorption through indirect effects on osteoclasts. Glucocorticoid-induced osteoporosis is preventable if treatment is considered when therapy with corticosteroids is initiated. Replacement of gonadal hormones and treatment with anti-resorptives (bisphosphonates and denosumab) or intermittent PTH has been shown to prevent bone loss in patients taking glucocorticoids. Other measures for preventing bone loss are calcium and vitamin D supplementation and reduction of glucocorticoid dose to the lowest effective dose for the underlying disease.
In most cases, secondary osteoporosis can be either prevented or treated if suspected by the clinician. Immobilization predisposes to bone mineral loss and osteosarcopenia; thus, a program of early mobilization of hospitalized older patients is essential. Mild-to-moderate vitamin D deficiency may give rise to osteoporosis rather than osteomalacia; oral replacement may prevent its occurrence. Finally, a comprehensive medication review could also identify those medications placing the patient at risk of osteoporosis; thus, adjusting their doses or reevaluating their indications constitutes the most appropriate approach.
EVALUATION
Risk Identification
While approaches to the patient with osteoporosis have often based treatment on T-scores, assessing clinical risk factors can facilitate early identification of patients who are more likely to suffer vertebral and nonvertebral fractures. This is particularly important, since most fractures occur in postmenopausal women with T-scores better than −2.5. Age is the most critical contributor to fracture risk. Additional important factors include a previous fracture as an adult, a history of fracture in a first-degree relative, body weight less than 127 lb, current smoking, and corticosteroid use for more than 3 months (Table 51-4). Impaired vision, early estrogen deficiency, dementia, frailty, recent falls, low calcium and vitamin D intake, low physical activity, and alcohol consumption of more than two drinks per day are additional clinical risk factors. A recent prior fracture is a robust predictor of future fracture. The increased risk is similar in men and women and is equivalent to the risk of a first fracture in a woman 10 years older. Half of patients will refracture within 10 years, and half of those refractures will occur within 2 years of the first fracture. Therefore, most older patients with a prior fracture are candidates for treatment.
TABLE 51-4 ■ RISK FACTORS FOR OSTEOPOROTIC FRACTURE
Identifying patients at risk of osteoporosis and osteoporotic fractures should be routine practice in geriatric medicine. The presence of risk factors (see Table 51-4) has a very high predictive value for osteoporotic fractures,
especially in older persons. However, except for age, which has the highest predictive value, these risk factors carry different weights in predicting the absolute risk of future fractures. Therefore, to help the clinician accurately calculate a patient's risk of suffering a fracture within 5 or 10 years, two widely validated online assessment tools are available.
The FRAX index (http://www.shef.ac.uk/FRAX) estimates the absolute risk of suffering a fracture in 10 years. This calculator has proved extremely useful because it incorporates data from large cohorts worldwide. However, a major limitation of this tool is that falls are not included in its algorithm. Because falls are an important risk factor for fractures, FRAX may underestimate the level of risk in frequent fallers, particularly frail older adults. In contrast, the Garvan tool (http://www.garvan.org.au/bone-fracture-risk) includes a history of previous falls in its algorithm. Initially developed and validated using the Dubbo Osteoporosis Study database, this tool has since been tested in non-Australian populations. Another major advantage of this tool is that it calculates fracture risk at both 5 and 10 years, providing valuable information that can easily be shared with the patient.
Since many osteoporotic fractures result from falls (Chapter 43) or the simultaneous presence of sarcopenia (Chapter 49), it is essential to assess patients for fall risk and sarcopenia and institute preventive measures where appropriate. Figure 51-7 proposes a practical diagnostic algorithm to identify osteoporosis and sarcopenia in clinical practice. The risk factors for osteoporosis and sarcopenia are almost identical; thus, identification of secondary causes of osteoporosis should trigger the assessment for the presence of sarcopenia. The causes of falls are often multifactorial and include medications, poor vision, impaired cognition, maladaptive devices, alcohol, orthostatic hypotension, impaired balance and gait, environmental hazards, and lower extremity weakness. Recent studies suggest that specific performance measures can help to identify those at greater risk of falling.
Individuals who cannot maintain a semi-tandem stand for 10 seconds with their eyes open are at increased risk. A gait velocity of less than 0.8 m/s also predicts a greater propensity to fall. Additional office-based screening tests that identify potential fallers include taking longer than 14 seconds to complete the timed up-and-go test, the inability to maintain a one-leg stand for at least 5 seconds, and a score of less than 19 on the performance-oriented mobility assessment. Finally, the SARC-F questionnaire (Strength, Assistance with walking, Rising from a chair, Climbing stairs, and Falls) has a high specificity (~95%) for detecting sarcopenia. The criteria are: (1) difficulty lifting and carrying 10 pounds, (2) difficulty walking across a room, (3) difficulty transferring from a chair or bed, (4) difficulty climbing a flight of 10 steps, and (5) falls in the past year. The first four criteria are scored as none = 0, some = 1, and a lot = 2. The last criterion is scored as no falls = 0, 1 to 3 falls = 1, and 4 or more falls = 2. A score of 4 or greater is predictive of clinically important sarcopenia and is associated with adverse outcomes.
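The SARC-F scoring described above can be sketched as a simple function. This is an illustrative sketch only; the function and argument names are hypothetical, and the falls mapping follows the standard SARC-F scheme (no falls = 0, 1–3 falls = 1, 4 or more falls = 2).

```python
def sarc_f_score(strength, walking, chair_transfer, stairs, falls_last_year):
    """Total SARC-F score (0-10); a score of 4 or greater suggests sarcopenia.

    The first four arguments are difficulty ratings for lifting/carrying
    10 lb, walking across a room, transferring from a chair or bed, and
    climbing a flight of 10 steps: 0 = none, 1 = some, 2 = a lot.
    `falls_last_year` is the raw number of falls in the past year.
    """
    for item in (strength, walking, chair_transfer, stairs):
        if item not in (0, 1, 2):
            raise ValueError("difficulty items must be scored 0, 1, or 2")
    # Standard SARC-F falls mapping: none = 0, 1-3 falls = 1, >=4 falls = 2.
    if falls_last_year == 0:
        falls_item = 0
    elif falls_last_year <= 3:
        falls_item = 1
    else:
        falls_item = 2
    return strength + walking + chair_transfer + stairs + falls_item

# Some difficulty carrying 10 lb and climbing stairs, plus two falls in the
# past year: total score 3, just below the sarcopenia cutoff of 4.
print(sarc_f_score(1, 0, 0, 1, 2))  # -> 3
```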
FIGURE 51-7. Clinical algorithm for the combined assessment and management of osteoporosis and sarcopenia in older persons. (Reproduced with permission from Kirk B, Zanker J, Duque G. Osteosarcopenia: epidemiology, diagnosis, and treatment-facts and numbers. J Cachexia Sarcopenia Muscle. 2020;11[3]:609–618.)
Bone Mineral Density (BMD) Assessment
BMD measurement has historically been considered the gold standard for the diagnosis of osteoporosis in the clinical setting. Various techniques can be used to measure BMD of the hip, spine, wrist, or calcaneus. The preferred method of BMD measurement is dual-energy x-ray absorptiometry (DXA).
BMD of the hip, anteroposterior spine, lateral spine, and wrist can be measured using this technology. Quantitative computerized tomography is also used to measure BMD of the spine. Specific software can adapt computerized tomography scanners for BMD measurement. The advantages
of DXA over quantitative computerized tomography include lower cost, lower radiation exposure, and better reproducibility over time. Peripheral DXA (which measures wrist BMD) or ultrasonography of the calcaneus may be helpful for general osteoporosis screening; these methods have the advantages of reduced cost and portability. Peripheral bone densitometry (performed at the heel, finger, or forearm) is highly predictive of hip, spine, wrist, rib, and forearm fractures over the subsequent 12 months.
Assessment of BMD should be considered for (1) postmenopausal women younger than 65 years with one or more additional risk factors (other than menopause); (2) all women older than 65 years, regardless of additional risk factors; and (3) patients with a history of minimal trauma fracture in which osteoporosis treatment is being started (as a baseline BMD assessment and to evaluate future therapeutic response). A study of more than 200,000 postmenopausal women from more than 4000 primary care practices in 34 states reported that approximately 50% of this population had low BMD previously undetected, including 7% of women with osteoporosis. An additional study found that screening postmenopausal women with BMD reduced the incidence of hip fracture by 36%. These studies suggest that efforts should be made to increase BMD measurements in postmenopausal women.
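The three indications above can be expressed as a simple decision rule. The sketch below is purely illustrative; the function and its parameters are hypothetical, and real screening decisions involve more clinical judgment than a boolean check.

```python
def dxa_indicated(age, is_woman, postmenopausal, additional_risk_factors,
                  minimal_trauma_fracture):
    """Rough encoding of the three BMD-assessment indications listed above."""
    # (3) history of minimal-trauma fracture when treatment is being started
    if minimal_trauma_fracture:
        return True
    # (2) all women older than 65, regardless of additional risk factors
    if is_woman and age > 65:
        return True
    # (1) postmenopausal women younger than 65 with >=1 additional risk factor
    if postmenopausal and age < 65 and additional_risk_factors >= 1:
        return True
    return False

# A 58-year-old postmenopausal woman with one additional risk factor
# meets indication (1).
print(dxa_indicated(58, True, True, 1, False))  # -> True
```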
Postmenopausal women with significant kyphosis and clinical risk factors also do not require BMD testing to confirm the diagnosis of osteoporosis. Instead, both of these subsets of patients deserve treatment. In a cohort of over 8000 women participating in the Study of Osteoporotic Fractures, of the 243 who suffered a hip fracture, 54% had a total hip BMD T-score better than −2.5 at the start of the study. Similarly, in a study of 257 men aged 70 and older, although those with lower T-scores had a higher fracture rate, the majority of fractures occurred in men with T-scores better than −2.5. These studies emphasize that BMD is not the sole determinant of fracture risk; bone microarchitecture and quality, which densitometry does not directly assess, are also important, as is the propensity to fall.
As noted above, clinical risk factors need to be assessed. Furthermore, it is imperative to recognize that age is a much more significant factor than BMD in determining fracture risk. The 10-year probability of fracture in an 80-year-old is more than twice that of a 50-year-old with the same BMD T-score. BMD may also be used to establish the diagnosis and severity of osteoporosis in men and should be considered in men with low-trauma fracture, radiographic changes consistent with low bone mass, or diseases known to place an individual at risk of osteoporosis. Data relating BMD to fracture risk were initially derived from studies completed in women, but recent data suggest that similar associations hold in men as well. Assessment of BMD should also be strongly considered in both perimenopausal women and older men who are about to undergo long-term treatment with corticosteroids. Bone densitometry can be used to assess response to therapy; however, an interval of approximately 2 years between tests is usually necessary to obtain accurate and useful information. Biochemical markers of bone turnover provide much earlier information about adherence and treatment response.
Biochemical Markers of Bone Turnover
Serum and urine biochemical markers that reflect collagen breakdown (bone resorption) and bone formation help monitor osteoporosis treatment. Higher levels of resorption markers have been associated with increased hip fracture risk, decreased BMD, and increased bone loss in older adults in some studies. However, biochemical markers in many patients with osteoporosis will lie within the normal range, and there is often substantial overlap of marker values between women with high and low bone density or rates of bone loss. Therefore, at this time, markers are not recommended for screening or diagnosis of osteoporosis. In addition, few studies have compared the response of a particular marker (or combination of markers) and BMD to therapy in order to determine the magnitude of decrease of a biochemical marker necessary to prevent bone loss or, more importantly, fracture. Markers are most useful in assessing the response of an individual patient to treatment. Markers of bone resorption and formation decrease in response to antiresorptive therapy and increase in response to PTH, a treatment with anabolic properties. The advantage of serum over urinary markers is that intra-patient variability tends to be lower with serum markers, thus reducing error. Many of the osteoclast-specific markers can be rechecked as early as 6 weeks after beginning therapy. Successful antiresorptive therapy, which also implies compliance, will reduce serum levels of markers of both resorption and formation, since these processes are tightly coupled; however, changes in bone formation markers will lag several months behind changes in bone resorption markers.
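The monitoring described above is commonly expressed as the percent change in a resorption marker from baseline. The sketch below is a minimal illustration; the function names and the −30% response threshold are assumptions for demonstration, since real practice relies on the assay-specific least significant change rather than a fixed cutoff.

```python
def marker_percent_change(baseline, follow_up):
    """Percent change of a bone-turnover marker from baseline (negative = decrease)."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    return 100.0 * (follow_up - baseline) / baseline

def suggests_antiresorptive_response(baseline, follow_up, threshold_pct=-30.0):
    # Illustrative cutoff only: a decrease exceeding the assay-specific
    # least significant change suggests adherence and biological response.
    return marker_percent_change(baseline, follow_up) <= threshold_pct

# Example: a serum resorption marker falling from 600 to 300 (arbitrary
# units) is a 50% decrease from baseline.
print(marker_percent_change(600, 300))  # -> -50.0
```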
MANAGEMENT
Osteoporosis develops in older adults when the normal processes of bone formation and resorption become uncoupled or unbalanced, resulting in bone loss. Osteoporosis prevention and treatment programs, then, should focus on strategies that minimize bone resorption and maximize bone formation, as well as strategies that reduce falls. Several nonpharmacologic and pharmacologic options are available to health care providers. Importantly, modifying risk factors (see Table 51-4) should be the first step in preventing osteoporotic fractures in older adults.
Nonpharmacologic Interventions
Exercise Exercise is an essential component of osteoporosis treatment and prevention programs. Data in older men and women suggest a positive association between current exercise and hip BMD. Among regular exercisers, those who reported strenuous or moderate exercise had higher BMD at the hip than those who reported mild or less-than-mild exercise. Similar associations were seen for lifelong regular exercisers and hip BMD. In a randomized study of women at least 10 years past menopause, the group receiving calcium supplementation plus exercise had less bone loss at the hip than those assigned to calcium alone. Furthermore, high-intensity strength training effectively maintains femoral neck BMD and improves muscle mass, strength, and balance in postmenopausal women compared to nonexercising controls, suggesting that resistance training would be helpful to maintain BMD and to reduce the risk of falls in older adults.
A marked decrease in physical activity or immobilization results in a decline in bone mass; accordingly, it is essential to encourage older adults to be as active as possible. However, not all types of exercises have proved to be beneficial. Progressive resistance strength training of the lower limb improved BMD at the neck of the femur in postmenopausal women. In contrast, aerobic exercises and low impact activities like brisk walking and cycling have failed to increase BMD at any site. Whole-body vibration (WBV) has been shown to increase BMD in some studies; however, some trials have also reported no effect. Evidence regarding the role of exercise in preventing fracture is limited as no randomized controlled trial (RCT) has so far evaluated the role of exercise as a single intervention to prevent fractures. However, exercise and focused physical therapy can prevent falls, which are
significant contributors to increased fracture risk. Concerning exercise, an important consideration in older people is configuring a program according to comorbidities and functional status, as adherence may vary depending upon underlying conditions, particularly osteoarthritis and/or cardiopulmonary disease.
Physical therapy is an integral part of osteoporosis treatment programs, especially after an acute vertebral compression or hip fracture. The physical therapist can provide postural exercises, alternative modalities for pain reduction, and suggest changes in body mechanics that may help prevent future falls and fractures. Gait training or balance training and muscle strengthening can help prevent falls, even for relatively frail older persons. A meta-analysis of randomized clinical trial interventions to reduce falls concluded that all types of exercises achieved similar benefits in balance, endurance, flexibility, and strength. The key message is to prescribe a program for those patients who are mobile and functional enough, in addition to cognitively capable, to participate.
Nutrition (calcium and vitamin D) Calcium and vitamin D are required for bone health at all ages. Postmenopausal women and men older than 65 need 1200 to 1500 mg/day of elemental calcium to maintain a positive calcium balance. At least 800 IU/day of vitamin D is required, although evidence suggests that doses of 2000 IU of cholecalciferol per day or more are necessary to achieve serum levels (25[OH] vitamin D ≥ 75 nmol/L) that are optimally effective for fall and fracture prevention.
Overall, adequate calcium and vitamin D should be recommended for all older adults, regardless of BMD, to maximize bone health. In osteoporotic patients who require pharmacologic treatment, administration of calcium and vitamin D alone is not recommended. Indeed, supplementation with vitamin D enhances the BMD response to osteoporosis treatment. It is also worth noting that all pivotal trials of anti-osteoporotic agents (antiresorptive, anabolic, or dual-action agents) have included supplementation with calcium and vitamin D.
Despite our understanding of the role of calcium and vitamin D in bone physiology, recent analyses have raised some controversies about their role in increasing BMD and fracture prevention. Several recent meta-analyses showed that calcium intake from diet or supplements produced only a small increase in BMD. Notably, the increase was considered unlikely to decrease
fracture risk. There is also controversy regarding supplementation with a high dose of calcium, as some reports have suggested an increased risk of cardiovascular disease. However, a recent systematic review and dose- response meta-analysis of prospective cohort studies found that total calcium intake was associated with lower cardiovascular mortality in postmenopausal women. Dietary calcium was associated with all-cause mortality, and supplemental intake was not associated with the risk of all- cause, cancer, or cardiovascular mortality.
Like calcium, the role of vitamin D supplements in improving BMD and decreasing fracture risk is debated, as the latest meta-analyses have failed to show any benefit in improving BMD or reducing fracture risk. This is despite earlier analyses conclusively stating the positive role of vitamin D supplements in osteoporosis management by decreasing the risk of nonvertebral fractures. This lack of apparent benefit could be due to the inclusion in RCTs of a significant number of healthy people with no risk factors for osteoporosis, with normal serum levels of vitamin D, or receiving different doses of vitamin D supplements. To add to the controversy surrounding its use, high doses of vitamin D given as boluses (500,000 IU yearly or 60,000 IU monthly) have consistently been shown to increase the risk of falls, with a trend toward increased fractures. Despite these controversies, it is accepted that vitamin D has a role in osteoporosis (and sarcopenia) management, particularly in those with vitamin D deficiency. While frank vitamin D deficiency (≤ 30 nmol/L) should be treated with a loading dose of 50,000 IU cholecalciferol or ergocalciferol, it is recommended that all others receive an appropriate daily dose of 1000 to 4000 IU cholecalciferol.
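The dosing rule in the preceding paragraph can be sketched as a tiny helper. This is an illustrative sketch, not clinical guidance; the function name and return strings are hypothetical.

```python
def vitamin_d_plan(serum_25ohd_nmol_l):
    """Map a serum 25(OH)D level (nmol/L) to the supplementation described above."""
    if serum_25ohd_nmol_l <= 30:  # frank vitamin D deficiency
        return "loading dose of 50,000 IU cholecalciferol or ergocalciferol"
    # all others: an appropriate daily maintenance dose
    return "daily dose of 1000 to 4000 IU cholecalciferol"

# A level of 20 nmol/L falls in the frank-deficiency range.
print(vitamin_d_plan(20))
```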
Nutrition (additional factors) Two prospective studies examined the effect of additional nutritional factors on bone loss and fracture risk in older adults. The Framingham Osteoporosis Study found that higher baseline magnesium, potassium, and fruit and vegetable intakes were associated with higher baseline BMD. In men, increased potassium and magnesium intakes were associated with lower bone loss at the femoral neck. Additionally, this study showed a correlation of hip fractures with higher serum levels of homocysteine. Although homocysteine levels are associated with vitamin B12, serum levels of this vitamin have not been associated with lower BMD,
suggesting that the role homocysteine plays in bone biology remains to be elucidated. However, other studies have not found a clear association between homocysteine levels and fracture. In addition, lower baseline
protein intake or percent of total energy from animal protein has been associated with greater bone loss at the femoral neck and lumbar spine. In another prospective cohort study, the Study of Osteoporotic Fractures, BMD was not related to the ratio of animal to vegetable protein intake. Still, a higher ratio of animal to vegetable protein intake was associated with greater femoral neck bone loss and an increased risk of hip fracture. These studies suggest that nutritional factors other than calcium and vitamin D are essential for bone health in older adults.
Prospective randomized studies are indicated to elucidate further the role of nutrition in preventing and treating osteoporosis in older adults.
PHARMACOLOGIC TREATMENT (TABLE 51-5)
TABLE 51-5 ■ PHARMACOLOGIC AGENTS FOR THE TREATMENT OF OSTEOPOROSIS
Class/Drug Name (Mechanism) | Treatment Dosage and Formulation | Patients Studied | Efficacy | Key Side Effects/Precautions

Bisphosphonates (inhibitors of osteoclast activity)
Alendronate (Fosamax, Binosto): 70 mg weekly tablet. Studied in men and postmenopausal women with osteoporosis. Reduced hip and vertebral fractures by approximately 50% over 3 years. Contraindicated if eGFR < 35 mL/min.
Risedronate (Actonel, Atelvia): 35 mg weekly, 75 mg on 2 consecutive days monthly, or 150 mg monthly, orally. Reduced vertebral fractures by 41% to 49% and nonvertebral fractures by 36% over 3 years; approved for use in patients on glucocorticoid therapy.
Ibandronate (Boniva): 150 mg monthly tablet or 3 mg intravenously every 3 months. Reduced vertebral fractures by approximately 50% over 3 years.
Zoledronic acid (Reclast, Aclasta): 5 mg intravenous infusion yearly for 3 years. Reduced vertebral fractures by 70%, hip fractures by 41%, and nonvertebral fractures by 25% over 3 years.
Class side effects/precautions: common, gastrointestinal upset with oral agents and influenza-like symptoms after infusion; uncommon, eye inflammation; rare, osteonecrosis of the jaw (largely in patients with cancer) and atypical femoral fracture (> 5 years of use).

Anabolic agents (activity resulting in new bone formation)
Teriparatide (Forteo), a PTH analog: 20 mcg daily subcutaneous injection for a maximum of 24 months. Studied in postmenopausal women and men with osteoporosis, including glucocorticoid-induced osteoporosis. Reduced vertebral fractures by 65% and nonvertebral fractures by 53% over 18 months.
Abaloparatide (Tymlos), a parathyroid hormone-related protein (PTHrP) analog (approved in some locations): 80 mcg daily subcutaneous injection for a maximum of 24 months. Reduced the risk of vertebral fractures by approximately 86%.
Class side effects/precautions: common, nausea and dizziness; theoretical risk of osteosarcoma (observed in rats); caution or avoidance in those at increased risk of osteosarcoma, including Paget disease of bone, prior skeletal radiation therapy, unexplained hypercalcemia, or a history of skeletal malignancy or bone metastases.

RANK-ligand inhibitor
Denosumab (Prolia), inhibits osteoclast formation and activity: 60 mg subcutaneous injection every 6 months. Studied in postmenopausal women with osteoporosis, men with low bone mass, and glucocorticoid-induced osteoporosis. Reduced vertebral fractures by 68%, hip fractures by 40%, and nonvertebral fractures by 20% over 3 years.
Side effects/precautions: uncommon, hypocalcemia, cellulitis, and skin rash; rapid bone loss after treatment cessation; rare, serious infections, osteonecrosis of the jaw, and atypical femoral fracture.
Estrogen Replacement Therapy
Multiple studies demonstrate that postmenopausal estrogen use will prevent bone loss at the hip and spine when initiated within 10 years of menopause. However, there have been few prospective studies of estrogen replacement therapy and fracture prevention. One small study demonstrated a reduction in vertebral fractures in postmenopausal women with transdermal estradiol compared to placebo. The Women’s Health Initiative study showed a 24% reduction in all fractures and a 33% reduction in hip fractures in women taking estrogen plus progestin. However, the Women’s Health Initiative study also concluded that the overall risks of estrogen plus progestin outweighed the benefits, including those associated with reducing fractures. Few studies have evaluated the use of estrogen in women older than 70 years.
Observational data, however, from the Study of Osteoporotic Fractures support a protective effect of current estrogen use against hip fracture, even in the oldest women.
Lower doses of estrogen also effectively reduce bone resorption and bone loss in older women; the lower doses also result in fewer side effects than the usual replacement doses typically used by clinicians. 17β-estradiol
0.25 mg/day was as effective as 0.5 and 1.0 mg/day in reducing biochemical markers of bone turnover in 75-year-old women compared to placebo. The
side-effect profile of 0.25 mg/day was similar to placebo and significantly different from the two higher doses. In a longer-term study, 0.3 mg/day of conjugated equine estrogen plus 2.5 mg/day of medroxyprogesterone acetate increased bone density of the hip and spine in older women who were vitamin D replete. While a recent report from the Women's Health Initiative study demonstrates cardiovascular benefit in women aged 50 to 59 who took estrogen, better alternatives for the treatment of osteoporosis in older patients currently exist. Therefore, in older patients, particularly those at least 5 to 10 years postmenopausal, estrogen is not recommended.
Bisphosphonates
Bisphosphonates decrease bone resorption by inhibiting osteoclast action and survival while promoting secondary mineralization. Structurally similar to pyrophosphate, they bind to hydroxyapatite on the bone surface and inhibit osteoclast activity.
Alendronate The efficacy of alendronate in increasing BMD at all sites and reducing the risk of vertebral fracture in older persons has been well established; however, evidence regarding its effectiveness against nonvertebral or hip fracture is less robust. Alendronate treatment for 3 years in postmenopausal women (mean age 71) with existing vertebral fracture and low femoral neck BMD decreased the risk of new morphometric and clinical vertebral fractures. For postmenopausal women (mean age 68) with no preexisting fracture, alendronate was only effective in reducing the risk of clinical fracture in those with femoral neck T-scores ≤ −2.5. Additionally, a subanalysis showed a decrease in the risk of nonvertebral fractures in those with femoral neck T-scores ≤ −2.5. Alendronate treatment was equally effective in those ≤ 75 and those > 75 years, reducing the risk of vertebral fracture by 51% and 38%, respectively, after 12 months. There was not enough power to conduct subgroup analyses to determine whether alendronate prevented hip fracture in older people (aged ≥ 75). The time to therapeutic benefit for alendronate in individuals ≥ 70 was calculated to be 8 months; therefore, limited life expectancy in older and frail people should not delay the clinical decision to commence treatment. Furthermore, in a study of older women with osteoporosis living in residential aged care, alendronate treatment for 2 years was well tolerated and increased BMD at all sites.
Risedronate Risedronate has reduced the cumulative incidence of new vertebral fracture over 3 years by 41% in postmenopausal women (mean age
69) with established osteoporosis. Risedronate has also reduced the incidence of hip fracture in postmenopausal women (mean age 74) and an older group with osteoporosis (aged ≥ 80, mean age 83). However, the risk of hip fracture in the older cohort with nonskeletal risk factors (such as a propensity for falls) remained unaffected. The lack of efficacy may be explained by the fact that bone density was not measured in those > 80, so that group likely included women who did not have osteoporosis as measured by BMD. A pooled analysis of multiple trials found that risedronate treatment for 3 years in women > 80 reduced the risk of vertebral fracture by 44%. As the risk and prevalence of vertebral fractures increase with age, older people are more likely to benefit from risedronate, evidenced by a higher absolute risk reduction, similar to that seen with alendronate. For nonvertebral fractures, the reduction in the cumulative incidence over 3 years was not statistically different between the risedronate group and placebo in the same cohort. The lack of apparent benefit for nonvertebral fracture in older adults could be due to nonskeletal risk factors for fragility fracture in this age group (age ≥ 80); however, further analysis is required to confirm this.
Zoledronic acid In postmenopausal women (mean age 73) with osteoporosis (BMD ≤ −2.5 or radiological evidence of at least one vertebral fracture), intravenous infusion of zoledronic acid yearly for 3 years reduced the risk of vertebral fracture by 70%, nonvertebral fracture by 25%, and hip fracture by 41%. In postmenopausal women with a previous hip fracture (mean age 74), zoledronic acid reduced the risk of any new clinical fracture by 35%.
Moreover, zoledronic acid treatment had an additional benefit for all-cause mortality, with fewer deaths in the subjects receiving zoledronic acid (9.6%) than the placebo (13%). Zoledronic acid also significantly reduced the incidence of clinical vertebral, nonvertebral, or any clinical fracture in adults aged 75 or older.
Zoledronic acid was shown to be safe and well-tolerated by older adults in the trial at the once-a-year dose (5 mg IV). The most common adverse effects were postinfusion influenza-like symptoms that include fever, arthralgia, myalgia, and headache; however, these symptoms decreased with subsequent infusions. Although bisphosphonate therapy in frequent and high doses in cancer patients is associated with osteonecrosis of the jaw (ONJ), when used to treat osteoporosis in the recommended dose, ONJ is very rare (estimated incidence 0.001%–0.01%). The antifracture effectiveness of
bisphosphonates persists even after discontinuation. This, together with recommended yearly infusion doses, improves compliance and efficacy in older adults who often struggle with compliance due to pill burden.
Intravenous zoledronic acid also avoids the gastrointestinal side effects commonly encountered with oral bisphosphonate preparations, which are particularly frequent in older adults. Prolonged use of bisphosphonates is associated with an increased risk of atypical femoral fracture (AFF).
However, absolute numbers are small, and this risk is far outweighed by the significant reductions in the risk of hip and other fractures and their adverse consequences. The risk of AFF increases with more prolonged treatment with any bisphosphonate and should be investigated with x-ray in patients complaining of groin or thigh pain, as prodromal pain precedes fracture in most cases. Further investigation with a bone scan or MRI may be required. The risk of AFF rapidly diminishes after termination of bisphosphonate treatment.
Denosumab
Denosumab is a humanized monoclonal antibody that binds to RANKL and inhibits osteoclast activation and activity. Denosumab (60 μg) given subcutaneously every 6 months to postmenopausal women (age 60–90; mean 72) over 3 years reduced hip fracture by 40%. Denosumab also decreased the risk of new radiological vertebral fracture by 68% and nonvertebral fracture by 20%. Denosumab was very effective in individuals aged 75 or older and significantly decreased the risk of hip fractures. Interestingly, the denosumab group experienced fewer falls than the placebo group. However, preliminary analyses comparing a strength-related questionnaire did not reveal any difference between the treatment and placebo groups. Prospective clinical studies are needed to investigate the effects of denosumab on muscle function; decreasing the risk of falls by improving muscle function would reduce the risk of fragility fracture in older adults. Denosumab was well-tolerated, and the ease of administration by the subcutaneous route every 6 months makes it a preferred choice in many older adults. However, unlike bisphosphonates, denosumab is not incorporated into bone; therefore, its effect diminishes when treatment is ceased. The antifracture effect declines to pretreatment levels 12 months after discontinuing therapy, and the risk of fracture increases markedly in those with prior vertebral fracture. The risk of atypical fractures increases
with prolonged treatment; however, the risk-to-benefit ratio is small and should not preclude denosumab use. Indeed, denosumab treatment in long-term care residents may be preferred over other osteoporosis treatment modalities given the route of administration, the paucity of side effects, and the certainty of compliance.
Teriparatide
Teriparatide is a synthetic analogue of PTH and promotes both bone resorption and synthesis. Intermittent doses of teriparatide (20 mcg/d subcutaneously) exert an anabolic effect on bone, whereas continuous infusion promotes catabolic action. Treatment with teriparatide in postmenopausal women (mean age 71) over 18 months increased BMD at all sites. It also decreased the risk of vertebral fracture by 65% and nonvertebral fracture by 53%. Teriparatide was as effective in individuals 75 years or older as in those younger than 75. In adults 75 years or older, after a median treatment period of 19 months, teriparatide reduced new vertebral fractures by 65%; the number needed to treat was 11. The risk of hip fracture was not investigated as a primary endpoint in this trial. In a head-to-head comparison of teriparatide with risedronate in postmenopausal women (mean age 72), the teriparatide group had a 56% lower cumulative incidence of new vertebral fracture over 24 months.
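The number needed to treat (NNT) cited above is simply the reciprocal of the absolute risk reduction between the placebo and treatment arms. A minimal sketch of the arithmetic follows; the event rates used are hypothetical, chosen only so the calculation reproduces an NNT near 11, and are not the trial's actual fracture rates:

```python
def nnt(control_risk: float, treated_risk: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction.

    Risks are event proportions (0-1) in the control and treated arms.
    """
    arr = control_risk - treated_risk  # absolute risk reduction
    if arr <= 0:
        raise ValueError("treatment shows no absolute risk reduction")
    return 1.0 / arr

# Hypothetical rates: 14% vertebral fracture on placebo vs 5% on treatment
# gives an ARR of 0.09, i.e., roughly 11 patients treated per fracture prevented.
print(round(nnt(0.14, 0.05)))  # → 11
```

Note that NNT depends on the baseline (control) risk, which is why the same relative risk reduction yields a much lower NNT in high-risk older cohorts.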
Indications for teriparatide include a T-score worse than −3.5 or prevalent fractures in the setting of a T-score worse than −2.5. In addition, patients who continue to fracture or lose BMD after 2 years of bisphosphonate treatment are also candidates for teriparatide. The major limitations to the use of teriparatide in older adults are its significant cost and the mode of administration, since subcutaneous dosing requires an appropriate cognitive status, a high degree of motivation, and a considerable level of functional independence. At present, studies suggest that teriparatide should not be combined with bisphosphonate therapy, since concurrent bisphosphonate treatment appears to blunt the BMD response to teriparatide. Due to concerns regarding osteosarcoma in animal studies, teriparatide therapy should continue no longer than 2 years. It is also contraindicated in Paget disease and in those at risk of osteosarcoma or with unexplained alkaline phosphatase elevation. After discontinuation of teriparatide, it is imperative to follow with an antiresorptive agent to maintain gains in BMD.
Abaloparatide
Abaloparatide is a synthetic analogue of PTH-related peptide (PTHrP) and, owing to its preferential binding to the PTH receptor, differs from teriparatide in exerting predominantly anabolic action on bone. In a phase 3 trial, abaloparatide (80 mcg/d subcutaneously) given for 18 months to postmenopausal women (mean age 69) improved total hip BMD and reduced the risk of new vertebral fractures by 86% and nonvertebral fracture by 43%. A subgroup analysis in participants 80 years or older demonstrated that abaloparatide effectively increased BMD in both the hip and the spine. However, while there were numerical reductions in vertebral and nonvertebral fracture risk, they were not significantly different from placebo. Like teriparatide, the use of abaloparatide is restricted to a maximum of 2 years, based on the results of nonclinical studies of teriparatide in which it was associated with an increased risk of cancer. In general, as anabolic agents, both abaloparatide and teriparatide should be considered as starting therapy for patients with very high fracture risk and/or a history of multiple fractures.
Romosozumab
Romosozumab is a humanized monoclonal antibody against sclerostin and thereby antagonizes sclerostin's inhibitory effects on osteoblasts. Romosozumab differs from other antiosteoporosis agents in that it has dual effects on bone, decreasing bone resorption while increasing bone formation. Romosozumab administered for 12 months, followed by 12 months of denosumab treatment, lowered the risk of vertebral fracture by 75%. Romosozumab was compared with alendronate in postmenopausal women (mean age 74) and, after 2 years, was associated with a 48% lower risk of vertebral fracture, 19% lower risk of nonvertebral fracture, and 38% lower risk of hip fracture versus alendronate. When compared with the anabolic agent teriparatide, romosozumab treatment significantly increased spine BMD and trabecular hip BMD. Romosozumab treatment was also very effective in men with osteoporosis (mean age 72) and after 1 year markedly increased BMD at the lumbar spine and the hip.
Romosozumab, due to its anabolic effect on bone and ease of administration (210 mg monthly SC), when compared to teriparatide (20 mcg daily SC), has the potential to be the preferred agent to treat osteoporosis and therefore decrease the risk of fractures. Its use is currently restricted to a maximum of 12 months due to concerns regarding increased risk of cardiovascular events as observed in one phase 3 clinical trial and the
potential for oncogenesis due to previous known association of teriparatide with cancer in rodents. The subanalysis of the antifracture efficacy of romosozumab in old and very old adults is still awaited.
OSTEONECROSIS OF THE JAW AND ATYPICAL FEMORAL FRACTURES
Longer-term administration of antiresorptives has been associated with two major complications: ONJ and AFFs. ONJ is defined as the presence of exposed and necrotic bone in the maxillofacial region that does not heal within 8 weeks. The risk of ONJ may be increased in those undergoing invasive dental procedures, such as tooth extractions. The risk of ONJ in patients with postmenopausal osteoporosis taking oral bisphosphonates is proportional to the duration and cumulative dose of antiresorptive agents and is exceedingly low with an estimated incidence of 0.02% to 0.06%.
Therefore, in general, the minimal risk of ONJ associated with bisphosphonate use appears to be significantly outweighed by the potential benefit of fracture risk reduction and the reduction in subsequent morbidity due to fractures. Overall, a professional dental checkup should not delay osteoporosis treatment initiation in older persons, particularly those at a very high fracture risk.
AFFs are stress or insufficiency fractures located in the subtrochanteric region and diaphysis of the femur. Radiographically, AFFs arise in the lateral cortex with a transverse or short oblique configuration and are associated with cortical thickening (periosteal stress reaction). They have been reported in patients taking bisphosphonates and in patients treated with denosumab, but they also occur in patients with no exposure to these drugs. The absolute risk of AFFs in patients on bisphosphonates is very low, ranging from 3.2 to 50 cases per 100,000 person-years, although long-term use may be associated with a higher risk (> 100 per 100,000 person-years). More importantly, however, the number of fragility fractures prevented by bisphosphonate therapy far outweighs the number of AFFs that occur. To enable early detection, intervention, and prevention of AFF, clinicians should be vigilant and obtain imaging studies when patients on an antiresorptive present with thigh pain, as this may be an early sign of an AFF.
DRUG HOLIDAYS IN OSTEOPOROSIS TREATMENT
All long-term trials with bisphosphonates and denosumab have demonstrated sustained therapeutic efficacy and a very low incidence of side effects.
Findings from pooled analyses of three long-term extension trials involving bisphosphonates reveal that patients who received 6 years or more of bisphosphonates had fracture rates of 9.3% to 11%, whereas the fracture rate for patients switched to placebo was 8.0% to 8.8%. Consequently, it is reasonable to evaluate whether continued therapy imparts additional benefit. However, it is still crucial to evaluate future fracture risk when considering patients who may benefit from a drug holiday. A drug holiday may be considered after 3 to 5 years of antiresorptive therapy in patients with moderate or low risk of fracture because stopping these medications for a short period of time poses minimal risk to the patient. In high-risk patients, careful consideration for a drug holiday’s timing and/or duration is needed, as these patients may derive benefit from treatment beyond 5 years.
OSTEOPOROSIS IN NURSING HOMES
There is increasing concern about the underdiagnosis and undertreatment of osteoporosis in particular settings, such as long-term care institutions. Institutionalized patients, whether mobile or immobile, are at high risk of osteoporosis. Residents should be assessed upon admission and multifactorial prevention measures implemented. All should be treated with a combination of vitamin D (minimum dose of 800 IU/day) plus calcium (1200 mg/day). Because frail older adults may be markedly vitamin D deficient or exhibit an unpredictable response to supplementation, the serum 25(OH) vitamin D level should be obtained before beginning supplementation.
Between 1500 and as much as 4000 IU/day may be needed. If the 25(OH) vitamin D level is less than or equal to 50 nmol/L, patients should be started on 2000 IU/day of cholecalciferol. Individuals with levels ≤ 30 nmol/L should be started on a loading dose of 50,000 IU and then continued with a dose of 3000 to 4000 IU/day. Additionally, the presence of risk factors and/or previous fractures strongly supports the use of pharmacologic treatment with either antiresorptives or anabolic agents. Since osteosarcopenia is highly prevalent in this population, a combined diagnostic and therapeutic approach for osteoporosis and sarcopenia should be implemented. The clinician should consider the patient’s level of functionality, QoL, and life expectancy before starting pharmacologic treatment for osteoporosis in a long-term care setting. However, since hip
fractures lead to a decline in QoL and life expectancy, and fracture risk reduction can be achieved within as little as 6 months of treatment, pharmacologic approaches are justified in institutionalized patients who are at risk. Bisphosphonates are the first-line choice, and intravenous administration is likely to achieve better adherence. Denosumab appears to offer equal protection and may enhance adherence because of its subcutaneous administration. Anabolic agents such as teriparatide, abaloparatide, and romosozumab are not first-line agents and should be reserved for those with severe osteoporosis and repeated fractures despite antiresorptive treatment.
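The threshold-based vitamin D repletion scheme described above amounts to a simple decision rule. A minimal sketch follows, illustrative only and not clinical software; the function name and return structure are ours, with thresholds and doses taken from the text:

```python
def cholecalciferol_plan(level_25ohd_nmol_l: float) -> dict:
    """Map a serum 25(OH) vitamin D level (nmol/L) to the suggested
    cholecalciferol regimen. Illustrative sketch only, not clinical advice.
    """
    if level_25ohd_nmol_l <= 30:
        # Severe deficiency: one-time 50,000 IU loading dose,
        # then 3000-4000 IU/day maintenance
        return {"loading_iu": 50_000, "daily_iu_range": (3000, 4000)}
    if level_25ohd_nmol_l <= 50:
        # Insufficiency: start 2000 IU/day, no loading dose
        return {"loading_iu": 0, "daily_iu_range": (2000, 2000)}
    # Replete: routine supplementation (minimum 800 IU/day per the text)
    return {"loading_iu": 0, "daily_iu_range": (800, 800)}

print(cholecalciferol_plan(25)["loading_iu"])  # → 50000
```

The ≤ 30 nmol/L branch is checked first so that severe deficiency is not misclassified by the broader ≤ 50 nmol/L rule.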
FURTHER READING
Cauley JA, Giangregorio L. Physical activity and skeletal health in adults. Lancet Diabetes Endocrinol. 2020;8(2):150–162.
Chiodini I, Merlotti D, Falchetti A, Gennari L. Treatment options for glucocorticoid-induced osteoporosis. Expert Opin Pharmacother. 2020;21(6):721–732.
Coll PP, Phu S, Hajjar SH, Kirk B, Duque G, Taxel P. The prevention of osteoporosis and sarcopenia in older adults. J Am Geriatr Soc. 2021;69(5):1388–1398.
Colón-Emeric C, Whitson HE, Berry SD, et al. AGS and NIA Bench-to-Bedside Conference summary: osteoporosis and soft tissue (muscle and fat) disorders. J Am Geriatr Soc. 2020;68(1):31–38.
Cosman F, Dempster DW. Anabolic agents for postmenopausal osteoporosis: how do you choose? Curr Osteoporos Rep. 2021;19(2):189–205.
Devlin MJ, Rosen CJ. The bone-fat interface: basic and clinical implications of marrow adiposity. Lancet Diabetes Endocrinol. 2015;3(2):141–147.
Diab DL, Watts NB. Updates on osteoporosis in men. Endocrinol Metab Clin North Am. 2021;50(2):239–249.
Dominguez LJ, Farruggia M, Veronese N, Barbagallo M. Vitamin D sources, metabolism, and deficiency: available compounds and guidelines for its treatment. Metabolites. 2021;11(4):255.
Farr JN, Khosla S. Cellular senescence in bone. Bone. 2019;121:121–133.
Feehan J, Al Saedi A, Duque G. Targeting fundamental aging mechanisms to treat osteoporosis. Expert Opin Ther Targets. 2019;23(12):1031–1039.
Fink HA, MacDonald R, Forte ML, et al. Long-term drug therapy and drug discontinuations and holidays for osteoporosis fracture prevention: a systematic review. Ann Intern Med. 2019;171(1):37–50.
Jain S. Role of bone turnover markers in osteoporosis therapy. Endocrinol Metab Clin North Am. 2021;50(2):223–237.
Kanis JA, Harvey NC, Johansson H, Odén A, McCloskey EV, Leslie WD. Overview of fracture prediction tools. J Clin Densitom. 2017;20(3):444–450.
Kennedy CC, Ioannidis G, Thabane L, et al. Successful knowledge translation intervention in long-term care: final results from the vitamin D and osteoporosis study (ViDOS) pilot cluster randomized controlled trial. Trials. 2015;16:214.
Khosla S, Hofbauer LC. Osteoporosis treatment: recent developments and ongoing challenges. Lancet Diabetes Endocrinol. 2017;5(11):898–907.
Kirk B, Feehan J, Lombardi G, Duque G. Muscle, bone, and fat crosstalk: the biological role of myokines, osteokines, and adipokines. Curr Osteoporos Rep. 2020;18(4):388–400.
Liu J, Curtis EM, Cooper C, Harvey NC. State of the art in osteoporosis risk assessment and treatment. J Endocrinol Invest. 2019;42(10):1149–1164.
Troen BR. Falls: to D or not to D-that is not the (only) question! Ann Intern Med. 2021;174(2):261–262.
Zanker J, Duque G. Osteoporosis in older persons: old and new players. J Am Geriatr Soc. 2019;67(4):831–840.
Chapter 52
Osteoarthritis
Michele R. Obert, Ernest R. Vina, Jawad Bilal, C. Kent Kwoh
INTRODUCTION
Osteoarthritis (OA) is a highly prevalent and disabling disease and has been designated as a serious disease by the US Food and Drug Administration (FDA). Approximately 240 million people worldwide are affected by symptomatic OA, including at least 32 million in the United States. This number is expected to rise with longer life expectancies and the worsening obesity epidemic. OA is the third leading cause of years lived with a disability in the United States and is the most common cause of mobility limitation among older adults. Overall, there is a 1 in 2 lifetime risk of developing symptomatic knee OA. In addition, OA is associated with an increased risk of dying prematurely and with increased prevalence of comorbid conditions such as cardiovascular disease, diabetes mellitus, and depression. Those with hip or knee OA have a 20% excess mortality compared to those without it, partly due to limitation in their physical activity levels. Finally, OA-associated economic burden is highly significant due to direct medical costs and indirect costs (eg, lost wages, home care, hospital expenditures). It is estimated that wages lost due to OA approximate $65 billion, and medical costs can exceed $100 billion annually in the United States.
EPIDEMIOLOGY, CAUSES, AND PREDISPOSING FACTORS
Age, Sex, and Race
Age is the strongest risk factor for developing OA. The prevalence of the disease increases with increasing age. Among adults older than age 60 in North America, approximately 20% to 40% have radiographic evidence of hand OA, while 30% to 40% have radiographic evidence of knee OA. Symptomatic OA is less prevalent, with 13% to 26% of all adults having symptomatic hand OA and 10% to 14% having symptomatic knee OA. At age 50, the pooled radiographic prevalence of thumb base OA is 5.8% for men and 7.3% for women, whereas at age 80 it is 33% for men and 39% for women.
Among adults 45 years or older, the prevalence of symptomatic hip OA is about 10%. In one study, symptomatic foot OA was present in 17% of adults older than 50 years, with disabling symptoms presenting in 9.5%.
Learning Objectives
1. To be aware of potential causes and risk factors for developing osteoarthritis (OA).
2. To learn how to diagnose OA based on patient clinical presentation, imaging tests, and other diagnostic tools.
3. To be familiar with nonpharmacologic, pharmacologic, and surgical treatments available for patients with OA with an understanding of comorbidities in the older adult.
Key Clinical Points
1. OA is a prevalent and disabling disease, especially among older adults.
2. Joint pain and self-limited morning stiffness are characteristic symptoms of the disease.
3. OA may be diagnosed clinically based on patient-reported symptoms and physical examination findings (eg, bony tenderness, bony enlargement, crepitus). Radiographic imaging can be used to help confirm the diagnosis.
4. Nonpharmacologic and pharmacologic therapies can be used to treat OA symptoms, but there are no currently available disease-modifying OA treatments.
5. Joint replacement surgery and other surgical options may be considered when conservative therapies fail.
Women are more likely than men to be affected by hand OA, hip OA, and knee OA, especially after menopause. The prevalence of OA may also vary by race and ethnicity. OA of the knee is more common in African Americans than in non-Hispanic Whites or Mexican Americans. OA of the hip is more common in people of European descent compared to those of Asian or African descent. Compared to older Whites in the United States, older Chinese subjects in China have higher prevalence of knee OA but lower prevalence of hip and hand OA. Increasing evidence shows that the prevalence of arthritis-attributable activity and work limitation and severe joint pain is significantly higher among Hispanics compared to non-Hispanic Whites.
Obesity
Obesity is the strongest modifiable risk factor for the development of OA. It is estimated that the reduction of body weight from obese to normal weight would reduce knee OA incidence by 21% in men and 33% in women.
Obesity is also associated with having an increased risk of developing hand and foot OA. However, there is less consistent data supporting the relationship between hip OA and obesity.
Occupation, Sports, and Joint Trauma
OA has also been linked to certain occupations. Working as a farmer or a construction worker/laborer increases the risk of developing knee and hip OA, especially among those who are overweight. Specific occupational activities such as more than 30 minutes of squatting, more than 30 minutes of kneeling, and climbing more than 10 flights of stairs per day increase the risk of developing radiographic knee OA. Among older adults, recreational walking, jogging, and other recreational activities do not seem to increase the risk of developing knee OA. However, high-impact sports are associated with increased risk of OA in certain joints: American football (knees, feet, ankles); baseball (shoulders, elbows); soccer (knees, hips, ankles); ice
hockey (knee); and boxing (carpometacarpal). These often account for the development of OA at joints that are not usually affected by OA.
The increased risk of OA from occupational and sports activities is likely due to injury rather than participation in these activities. Hip dislocation and femoral acetabular impingement are associated with the development of hip OA. Ligamentous or meniscal damage in the knee and/or surgical meniscectomy increases the risk of knee OA. In parallel, quadriceps muscle weakness is linked to tibiofemoral and patellofemoral knee OA. Prior foot or ankle injuries have also been linked to the development of foot OA.
Malalignment and Physical Abnormalities
Joint malalignment may also increase the risk of developing OA. Varus alignment or knee extensor muscle weakness increases the risk of knee OA development and progression. This increased risk is most prominent in overweight and obese persons. Similarly, congenital hip dysplasia or cam deformity increases the risk of hip OA, especially in middle-aged (55–65 years) and younger (< 50 years) individuals. Specific physical attributes, such as leg length inequality, a higher knee height, and an index finger shorter than the ring finger, have all been associated with an increased risk of developing radiographic knee OA.
Genetics
The number of OA genetic risk loci has recently increased significantly through genome-wide association studies and now includes 90 genome-wide significant loci for OA. The effect size associated with these genes is small, however. Genes that code for structural proteins of the cartilage’s extracellular matrix seem to have a role, especially those that code for collagen type II (COL2A1). Genes that code for different interleukins (eg, IL-1A, IL-1B, IL-1RN, IL-4R, IL-17A, IL-17F, and IL-6) may also affect genetic susceptibility to OA. Other genes that have been implicated in the development of OA include the estrogen receptor α gene, vitamin D receptor gene, frizzled related protein gene, asporin, and proteins related to cartilage and bone development and bone homeostasis. In addition, genes related to height, hip shape, bone area, and developmental hip dysplasia may also play a role.
CLASSIFICATION AND PATHOPHYSIOLOGY
Classification
OA has traditionally been classified as either primary (idiopathic) or secondary to known causes (Table 52-1). Primary OA most commonly affects the joints of the hands, knees, hips, spine, and feet. Nodal generalized OA is characterized by polyarticular finger involvement (distal interphalangeal [DIP], proximal interphalangeal [PIP], first carpometacarpal [CMC] joints) and a predisposition to OA of the knee, hip, and spine. It commonly affects middle-aged women with a strong family history of OA. Inflammation is being increasingly recognized as having a role in the pathophysiology of OA. Only a minority of OA patients have inflammatory symptoms, such as swelling and redness of the joints, however, and often these symptoms are mild and intermittent. Erosive OA is a variant that occurs when there is an additional erosive/inflammatory component, often involving the DIP and PIP joints of the hands. Other individuals may have a phenotype that indicates a larger role for mechanical factors in the pathophysiology.
Secondary OA occurs when another disease or condition causes OA. Conditions that can lead to secondary OA include trauma; inflammatory arthritis (eg, septic arthritis, rheumatoid arthritis, seronegative spondyloarthropathy, gout); metabolic/endocrine disorders (eg, hemochromatosis, hyperparathyroidism); neuropathic disorders; and anatomical abnormalities.
TABLE 52-1 ■ SECONDARY CAUSES OF OSTEOARTHRITIS
The American College of Rheumatology (ACR) has proposed specific criteria to standardize the classification of OA of the knee, hip, and hands for clinical and epidemiologic studies. These criteria are based on a combination of features such as patient age (> 50 years), clinical symptoms (ie, pain, < 30 minutes of morning stiffness), physical examination (ie, presence of crepitus, bony enlargement/tenderness), laboratory findings (ie, normal inflammatory parameters, synovial fluid consistent with OA and negative rheumatoid factor), and radiographic findings (ie, presence of osteophytes). These criteria only apply to patients with symptomatic disease, exclude patients with secondary OA, and are intended to classify and not diagnose OA.
Pathophysiology
OA can be thought of as a failed repair of joint damage due to abnormal intra- and extra-articular processes involving a combination of biomechanical, biochemical, and genetic factors. A single or a combination of insults to the joints, including biomechanical trauma, chronic inflammation, genetic and metabolic factors, oxidative damage, and chondrocyte senescence, can trigger or contribute to the cascade of events that leads to OA. In the earliest stage of OA, fibrillation of the superficial layer of the articular cartilage may be observed with loss of
glycosaminoglycan content in those areas. Eventually, the fibrillation extends to the subchondral bone, the cartilage fragments into the joint, the matrix degrades, and the cartilage is completely lost, leaving a denuded bone.
Chondrocytes in the articular cartilage react to insults in the joint, leading to changes in cellular function and matrix elements. Chondrocytes and synovial cells release matrix metalloproteinase (MMP) enzymes, such as collagenases, stromelysins, and gelatinases, responsible for cartilage degradation. While these MMPs may be susceptible to MMP inhibitors, synthesis of MMPs is enhanced and the inhibitors are overwhelmed in OA. Cytokines have been implicated in the regulation of this process. Interleukin-1 (IL-1), a catabolic cytokine, is known to suppress type II collagen synthesis and to induce proteases. IL-1 levels and cell sensitivity are increased in OA. Anabolic cytokines, such as insulin-like growth factor-1 and transforming growth factor β, on the other hand, are found in decreased levels in the serum and synovial fluid of OA patients. Besides cartilage degradation, subchondral bone may also increase in density, potentially reflecting a healing response to microfractures that have occurred. Cyst-like bone cavities may form, in addition to synovial inflammation and hypertrophy, which is not seen in normal aging joints. Osteophytes, or bony projections that form along joint margins, are often considered a hallmark of OA.
Several other factors have also been implicated in the development of OA. Crystals, including calcium pyrophosphate dihydrate and basic calcium phosphate crystals, have been identified in the synovial fluid of OA joints.
Other than their ability to induce inflammation, their role in the pathogenesis of OA remains unclear. Protein levels of nuclear factor erythroid 2-related factors 1 and 2 (NRF1 and NRF2), which regulate antioxidant gene expression for cellular protection, are lower in chondrocytes of persons with OA. An imbalance between antioxidant production by the chondrocyte and reactive oxygen species production has been found to damage lipids, protein, DNA, and normal cell signaling, contributing to the development of OA. In addition, the complement system may stimulate inflammatory mediators and enzymes that can contribute to the pathogenesis of inflammatory OA, while nitric oxide, known to activate MMPs in articular cartilage, may also mediate the osteoarthritic development process. Supported by epidemiologic studies, sex-related hormones have also been considered to play a role in the development of OA, especially in women.
PRESENTATION
Symptoms
Patients with symptomatic OA most commonly present with pain in the joint. Pain tends to worsen with increased activity or weight-bearing and to improve with rest. Early in the disease, patients may complain of pain that is sharp, intermittent, and unpredictable. These features may often lead patients to limit their activity to avoid pain. As the disease progresses, the pain becomes more constant and aching in nature. Late in the disease, pain may be noted with progressively less activity, possibly occurring even at rest and at night. Patients may also complain of morning stiffness that typically resolves within 30 minutes or less. Joint stiffness may also occur after periods of prolonged inactivity, also known as the “gelling” phenomenon. Other patients may report joint locking or joint instability. In contrast, joint stiffness lasting more than an hour, joint pain and stiffness that improve with activity, and persistent joint swelling all suggest an inflammatory type of arthritis, such as rheumatoid arthritis, instead of OA.
Knee OA patients may complain of localized pain, often in the medial and lateral joint lines, or diffuse knee pain. They may have difficulty climbing the stairs or walking for even short periods of time. With laxity in the knee joint, patients may have feelings of knee instability, which may lead to falls. Hip OA patients may complain of pain in the anterior hip or inguinal area. Less commonly, their pain may also be felt laterally (ie, in the trochanter) or referred to the knee. They may have difficulty crossing their legs or putting on a pair of shoes.
Hand OA patients may report pain involving the DIP, PIP, and first CMC joints. Gripping, pinching, holding, or lifting objects may be particularly challenging in patients with symptomatic hand OA. Foot OA can cause such disabling foot pain with ambulation that can lead to significant functional limitations and increased risk of falls.
OA most commonly affects the cervical and lumbar spine areas. Cervical spine OA typically causes neck pain but can also cause occipital headaches, upper extremity radicular pain, shoulder pain, and loss of dexterity of the hands. Rarely, large osteophytes may compromise the spinal canal, causing lower-extremity spasticity and gait disturbance. Lumbar spine OA can often cause lower back pain that may radiate into the lower extremities and worsen with bending ipsilateral to the involved joint. Lumbar facet joint osteophytes
can lead to lumbar canal stenosis with symptoms of claudication and/or pain at night; symptoms of spinal stenosis are often relieved by bending slightly forward (eg, walking uphill, upstairs or leaning forward on a shopping cart) and worsened by bending backward (eg, walking downhill or downstairs).
Other symptoms of spinal stenosis include pain, tingling, numbness, and weakness.
Physical Examination
OA patients may have a completely normal physical examination. Typically, they have joint-line tenderness, crepitus (ie, a peculiar crackling, crinkly, or grating feeling on palpation, which may also be audible) and bony enlargement. There may be joint swelling that tends to be intermittent and without palpable warmth. Synovitis can also be noted in OA, and some patients may have prominent joint effusions. With disease progression, the joint’s range of motion may decrease. Joint deformity, contractures, and/or laxity may also develop later in the disease.
In hand OA, there may be tenderness of the affected finger joints and/or tenderness of the first CMC joint. Enlargement of the first CMC joint can result in a squared appearance of the hand. Prominent bony enlargements of the DIP and PIP joints are known as Heberden and Bouchard nodes, respectively.
In knee OA, there is often crepitus on active knee motion and bony tenderness. There may also be effusion (without warmth) and a limited range of motion. The joint fluid may migrate into the semimembranosus bursa posteriorly, causing swelling along the posterior knee known as a popliteal or “Baker’s” cyst. Knee varus (“bow-legged”) or valgus (“knock-kneed”) deformities may be observed later in the disease. Medial/lateral knee joint laxity may be evident on examination and lead to joint instability.
In hip OA there may be reproducible pain along the anterior hip or the inguinal area with passive or active range of motion, particularly with extension and internal rotation. The range of motion in the hip joint may also be affected; limited internal rotation and/or extension may be an early indication of hip OA. Hip OA patients may also develop an antalgic gait in which the stance phase of the gait is abnormally shortened on the affected hip joint.
In foot OA, the first metatarsophalangeal (MTP) joint is most commonly affected, followed by the second cuneometatarsal (CMT) and talonavicular (TN) joints. There are few agreed-upon guidelines for the clinical assessment of foot OA. For first MTP joint OA specifically, the patient may exhibit pain in that joint upon walking, limited dorsiflexion of the first MTP joint, and pain on direct palpation of the joint. Other exam findings that may be present include hallux valgus, first interphalangeal joint hyperextension, and decreased ankle dorsiflexion range of motion.
EVALUATION
Diagnostic Imaging
OA can usually be diagnosed clinically based on history and physical examination alone. Although conventional radiography can demonstrate the severity of damage and may be used to confirm the diagnosis with an atypical presentation, radiographs are not necessary to make a diagnosis. Before making a clinical diagnosis of OA, however, other diseases should be excluded. At the very least, OA should be distinguished from referred pain, inflammatory arthritis conditions (eg, rheumatoid arthritis, gout, pseudogout) and periarticular bursitis (eg, trochanteric bursitis of the hip, pes anserine bursitis of the knee). Radiographic findings suggestive of OA are very common in older adults, yet many of these patients have minimal to no OA-related symptoms. Radiographic features of OA include joint space narrowing, osteophytes, subchondral sclerosis, subchondral cysts, and/or altered bone contours. X-rays may also show calcification of cartilage or other structures and soft-tissue swelling.
Radiographic findings specific to knee OA patients include medial tibiofemoral and/or patellofemoral joint space narrowing, along with osteophyte formation. Lateral joint space narrowing may also be seen but is less common (Figure 52-1). Osteophytes may be seen anteriorly and medially at the distal femur and proximal tibia and posteriorly at the patella and tibia. Radiographic changes seen in hip OA include joint space narrowing (superior, axial, or medial), osteophyte (femoral or acetabular) formation, and subchondral sclerosis.
FIGURE 52-1. Knee x-ray anteroposterior view showing severe joint space narrowing within the lateral compartment, genu valgus, and joint effusion.
With hand OA, the first CMC, the DIP, and the PIP joints are most commonly involved. Joint space loss is typically nonuniform and asymmetric. Involvement of more than two metacarpophalangeal joints suggests a type of arthritis other than OA. Hand x-rays of those with erosive OA will reveal erosions and the “gull-wing” deformity (Figure 52-2). X-rays of patients with lumbar or cervical OA will reveal intervertebral disc narrowing and osteophytes arising from the vertebral body margins (Figure 52-3). Sclerosis and cyst formation may also be seen. Neural foraminal narrowing may result from osteophyte formation, but this is best seen using a computed tomographic (CT) scan.
FIGURE 52-2. Hand x-ray showing joint space narrowing of the index, middle, ring, and small finger DIP joints and middle finger PIP joint. There are possible central erosions of the index and ring finger DIP joints, suggestive of erosive osteoarthropathy.
FIGURE 52-3. Lumbar spine x-ray showing extensive osteophyte formation.
With foot OA, a foot-specific atlas that grades osteophytes and joint space narrowing can be used to classify radiographic OA in the commonly affected areas, although this requires both dorsoplantar and lateral views to be taken while standing.
It is important to remember that conventional radiographs have limited resolution and cannot detect early OA. They also only provide images of bony structure and are two-dimensional projections of three-dimensional joints, meaning multiple views are necessary to visualize the joint and detect possible OA involvement properly.
As a research tool, MRI has allowed the detection of preradiographic OA. It offers greater resolution than an x-ray, provides depiction in three dimensions, and allows visualization of bone and soft tissue structures. MRI allows visualization of subchondral cyst-like lesions, subchondral bone attrition, joint effusion, synovitis and meniscal damage. The integrity of ligaments, periarticular cysts and bursae, and osteophytes may also be assessed through MRI. It is important to remember that although research with MRI has contributed to understanding the relevance of these structures in explaining pain and structural progression in OA, MRI is not necessary to diagnose OA.
Laboratory Studies
Laboratory tests are unhelpful in diagnosing OA but are helpful in excluding other causes of arthritis such as rheumatoid arthritis, gout, or infectious arthritis. Inflammatory markers, such as erythrocyte sedimentation rate and C-reactive protein level, and immunologic tests, such as antinuclear antibodies and rheumatoid factor, should not be routinely ordered unless there are signs or symptoms suggestive of inflammatory arthritis or autoimmune disease. Uric acid level may be ordered if gout is suspected.
Performing an arthrocentesis may be valuable in evaluating patients with presumptive OA. Synovial fluid findings in OA often show a white blood cell count of less than 2000 cells/mm3. Counts that are greater than 2000 cells/mm3 suggest an inflammatory or infectious arthritis etiology. Under polarized microscopy, there should be no visible crystals. The presence of gout or pseudogout crystals suggests crystalline arthritis as the underlying cause of joint pain.
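The synovial fluid thresholds described above can be summarized as a simple decision rule. The sketch below is purely illustrative (the function name and category labels are invented for this example, and it is in no way clinical software):

```python
def interpret_synovial_fluid(wbc_per_mm3, crystals_present=False):
    """Illustrative summary of the thresholds in the text.

    WBC < 2000 cells/mm3 is consistent with OA; counts of 2000 or
    greater suggest an inflammatory or infectious etiology; crystals
    seen under polarized microscopy point to crystalline arthritis
    (gout/pseudogout). Not a substitute for clinical judgment.
    """
    if crystals_present:
        return "crystalline arthritis (gout/pseudogout)"
    if wbc_per_mm3 < 2000:
        return "non-inflammatory (consistent with OA)"
    return "inflammatory or infectious etiology"
```

Under this rule, for example, a count of 800 cells/mm3 falls in the non-inflammatory category, whereas 15,000 cells/mm3 suggests an inflammatory or infectious cause.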
A variety of OA-related biomarkers have been identified and validated for several OA outcomes. They are products of cartilage and bone turnover (eg, urine/serum carboxy-telopeptide of type II collagen, serum hyaluronan). However, their utility in diagnosing OA, determining the risk of disease progression, and assessing the response to OA therapies is unclear and currently under investigation.
MANAGEMENT
The ACR/Arthritis Foundation (AF) and the Osteoarthritis Research Society International (OARSI) have published updated recommendations for management of OA. The main goals of management are to minimize OA-related pain, improve physical functioning and optimize the quality of life of patients with OA. There are nonpharmacologic, pharmacologic, and surgical treatment options for the management of OA (Figures 52-4 and 52-5). OA management should be tailored to the individual patient, and optimal management likely includes a combination of these treatment modalities. For example, evidence suggests that patient education, exercise programs, weight reduction, and wedge insoles all offer additional benefit in combination with an analgesic or nonsteroidal anti-inflammatory drug (NSAID).
FIGURE 52-4. Nonpharmacologic and pharmacologic treatment options that are recommended for the management of OA. Recommended therapies for the management of osteoarthritis (OA). Strongly and conditionally recommended approaches to management of hand, knee, and/or hip OA are shown. No hierarchy within categories is implied in the figure, with the recognition that the various options may be used (and reused) at various times during the course of a particular patient’s disease. * = Exercise for knee and hip OA could include walking, strengthening, neuromuscular training, and aquatic exercise, with no hierarchy of one over another. Exercise is associated with better outcomes when supervised. ** = Knee brace recommendations: tibiofemoral (TF) brace for TF OA (strongly recommended), patellofemoral (PF) brace for PF OA (conditionally recommended). *** = Hand orthosis recommendations: first carpometacarpal (CMC) joint neoprene or rigid orthoses for first CMC joint OA (strongly recommended), orthoses for joints of the hand other than the first CMC joint (conditionally recommended). IA = intraarticular; NSAIDs = nonsteroidal anti-inflammatory drugs; RFA = radiofrequency ablation. (Reproduced with permission from Kolasinski SL, Neogi T, Hochberg MC, et al. 2019 American College of Rheumatology/Arthritis Foundation Guideline for the Management of Osteoarthritis of the Hand, Hip, and Knee. Arthritis Care Res (Hoboken). 2020;72[2]:149–162.)
FIGURE 52-5. Nonpharmacologic and pharmacologic treatment options that are recommended against for the management of OA. Therapies recommended against: physical, psychosocial, and mind-body approaches (A) and pharmacologic approaches (B) in the management of hand, knee, and/or hip osteoarthritis. No hierarchy within categories is implied in the figure. I-A = intraarticular; IL-1 = interleukin-1; PRP = platelet-rich plasma; TENS = transcutaneous electrical nerve stimulation; TNF = tumor necrosis factor. (Reproduced with permission from Kolasinski SL, Neogi T, Hochberg MC, et al. 2019 American College of Rheumatology/Arthritis Foundation Guideline for the Management of Osteoarthritis of the Hand, Hip, and Knee. Arthritis Care Res (Hoboken). 2020;72[2]:149–162.)
Physical, Psychosocial, and Mind-Body Approaches
Self-efficacy and self-management OA patients should be taught the goals of OA treatment, the importance of changes in lifestyle, exercise, pacing of activities and weight loss, and various ways to minimize joint loading. The educational focus should be on initiating self-help and patient-driven treatments that reflect the nature and course of the disease rather than simply accepting therapies delivered by providers. Thereafter, the emphasis should be on maintaining adherence to nonpharmacologic and pharmacologic therapies. Such information may be taught through group courses, individual consultations, and regular telephone calls. Participants in these programs report reduction in pain, improvement in function, and weight loss. These programs may also increase patient self-efficacy and physical activity, and decrease the number of OA-related physician visits.
Exercise Aerobic exercise and muscle strengthening can significantly improve the physical health and symptoms of patients with OA. Regular aerobic activities that older adults can engage in include walking, bicycling,
swimming, and tai chi. Quadriceps muscle strengthening can be particularly helpful for a patient with knee OA. In general, aerobic and resistance training are similarly effective at reducing arthritis-related symptoms and disability in patients with knee and hip OA. In addition, balance exercises may be beneficial by improving a person’s ability to control and stabilize their body position.
For patients with advanced OA, mobilization exercises (ie, stretching and flexibility training) and isometric strengthening exercises can initially be prescribed. Mobilization exercises increase length and elasticity in muscles and periarticular tissues. Mobilization of the sesamoid apparatus and strengthening of the hallux plantar flexors are beneficial for first MTP joint OA. During isometric exercises, muscle length does not noticeably change and the affected joint does not move. Isometric exercises for older adults include chair leg extensions, wall sits, and hip extensions. The exercise regimen may then progress toward isotonic strengthening and aerobic exercises. During isotonic exercise, muscle tension remains unchanged but muscle length changes. Beneficial isotonic exercises include squats, wall slides, and leg presses. Finally, water-based exercise is a low-impact activity that takes the pressure off joints, muscles, and bones. It is particularly beneficial for those with severe OA and marked deconditioning.
Physical therapists are valuable for providing instructions and appropriate exercises. They are essential for outlining individualized treatment and a progressive home exercise program. Indeed, a physical therapy regimen that includes strengthening and neuromuscular training can improve symptoms in two-thirds of patients with advanced knee OA. Physical therapy evaluation may also result in provision of assistive devices, such as canes and walkers, as necessary.
In summary, exercise can be a joint-specific range of motion and/or strengthening program or a general aerobic conditioning regimen. Exercise may be supervised on land, in water, or in a self-directed home-based program. Regardless of the type of exercise regimen, extra attention is needed for the older patient to enhance safety and compliance with the program, taking into account potential comorbidities. Those with knee or hip OA should approach moderately or severely strenuous exercises (eg, stair climbing, heavy weightlifting, and running) with caution.
Weight loss All existing guidelines highly recommend weight loss in overweight and obese individuals with OA of the hip or knee for OA
management. In many patients, a structured weight management program may be necessary. An effective program focuses on developing healthy eating and physical activity habits. It should include a plan for the individual to lose weight slowly and steadily to maintain weight loss over the long run. Patients should avoid fad diets, which often result in rapid weight loss in the short term, but weight gain over the longer term. A realistic goal is weight loss of more than 5% to be achieved within a 20-week period. The program may include ongoing feedback, monitoring, and support. Weight reduction has been shown to improve knee and hip OA-related pain, stiffness and disability.
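The "more than 5% within 20 weeks" goal is easy to translate into a weekly target. The arithmetic sketch below is illustrative only (the function name and the 100-kg patient weight are hypothetical, not from the text):

```python
def weekly_weight_loss_target(weight_kg, fraction=0.05, weeks=20):
    """Spread a total weight-loss goal (default 5% of body weight over
    20 weeks, per the text) into an even weekly target.

    Illustrative arithmetic only, not individualized medical advice.
    """
    total_kg = weight_kg * fraction
    return total_kg, total_kg / weeks

# For a hypothetical 100-kg patient: 5.0 kg total, 0.25 kg per week.
total_kg, per_week_kg = weekly_weight_loss_target(100.0)
```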
Tai chi and acupuncture Tai chi, a practice that originated in China, is referred to as “moving meditation.” Practitioners move their bodies slowly, gently, and with awareness while incorporating deep breathing, meditation, and relaxation. It may be beneficial in improving strength, balance, and self-efficacy while decreasing the risk of falls in patients with lower extremity OA.
In contrast, the efficacy of acupuncture in patients with knee, hip, or hand OA remains controversial. Clinical trials have demonstrated that it may provide short-lived analgesia, mostly for those with knee OA.
Braces and taping A tibiofemoral knee brace, and at times patellofemoral knee braces, can reduce pain and improve stability and ambulation, leading to lower risk of falling for knee OA patients with varus or valgus instability.
Separately, a brace and a neoprene sleeve offer additional beneficial effects for knee OA compared with medical treatment alone. A brace tends to be more effective than a neoprene sleeve, however. Medially directed patellar taping may also benefit patients with knee OA.
Orthoses All patients with knee, hip, or foot OA should receive advice regarding appropriate footwear. However, laterally and medially wedged insoles have not demonstrated clear efficacy or benefit. Hand orthoses may benefit patients with first CMC joint OA of the hand, although evidence supporting hand orthoses for OA in other joints of the hand has not yet been established. With foot OA, footwear known as a “rocker sole,” where the sole of the shoe is curved, reduces the need for first MTP joint dorsiflexion and has been shown to reduce the pressure under the first MTP joint. Foot orthoses include shoe-stiffening inserts and contoured orthoses that may reduce foot pain.
Assistive devices Walking aids can reduce OA-related pain in patients with knee and hip OA. A cane can significantly reduce joint loading and pain. In addition, it can provide additional stability in a patient whose joint disease affects ambulation. It should be properly fitted so that the handle reaches approximately the level of the greater trochanter of the hip. Patients should be instructed to use the cane in the hand opposite the affected knee or hip. The elbow should be flexed approximately 15 to 20 degrees when walking with the cane. When disability is more severe or when OA is bilateral, a walker may be a better option. Assistive devices, such as zipper pulls, built-up handles on pencils/pens, cushioned carrying tools, and meal preparation devices, can be very beneficial for patients with hand OA. Occupational therapists are likely to be helpful in recommending lifestyle modifications and assistive devices to patients with arthritis symptoms.
Heat and cold therapy Thermal modalities may be effective for relieving OA symptoms, but the benefits may be temporary. All patients with hand, knee, and hip OA should be instructed in the use of thermal agents. Heat can be administered by various techniques, including the application of heat packs, ultrasound, electrically delivered heat, immersion in warm water, and paraffin baths. There are also store-bought heat patches, belts, packs, and wraps. Patients should be instructed that heat therapies should be limited to 20-minute intervals to reduce the risk of burns. Cold (or cryotherapy) can be administered by application of ice packs or massage with ice. It can decrease inflammation, minimize muscle spasms, and reduce pain. Patients should also be instructed to limit the use of ice or cold packs to 20 minutes. The choice of heat or cold is often based on patient preference.
Topical Agents
Capsaicin and rubefacients Topical capsaicin creams contain a lipophilic alkaloid extracted from chili peppers, which activates and sensitizes peripheral C-nociceptors. Both 0.03% and 0.08% topical capsaicin have been shown to reduce pain in knee OA, although a dose of 0.03% topical capsaicin is better tolerated than a dose of 0.08%. At this time, efficacy in treating hand OA has not been well demonstrated, and the risk of contaminating the eyes is high.
Capsaicin has also not demonstrated any benefit in treating hip OA, likely due to the depth of the joint under the skin. Topical capsaicin can be considered as an adjunct therapy or used instead of topical NSAIDs for foot OA as studies have shown similar improvement in pain. Rubefacients
containing salicylates (eg, trolamine salicylate, hydroxyethyl salicylate, diethylamine salicylate) may also be used as adjunctive agents, although supportive data are scant. Skin burning, stinging, and erythema are potential side effects.
Topical NSAIDs Topical NSAIDs, such as diclofenac sodium gel, are recommended as first-line pharmacological therapy to treat hand and knee OA prior to starting oral NSAIDs, given their favorable side effect profile and efficacy of pain relief. They are also recommended as first-line pharmacologic therapy for the treatment of symptomatic foot OA. In February 2020, the FDA approved Voltaren gel (diclofenac sodium topical gel 1%) for sale over the counter. Topical NSAIDs have a high safety margin and are not associated with acute renal failure or gastrointestinal adverse events when used as instructed. Thus, they may be particularly useful in patients with cardiac, renal, or gastrointestinal comorbidities. They may be less ideal for those with a large number of affected joints, in which case systemic therapy would be preferred. Similar to capsaicin, topical NSAIDs are not recommended for treating hip OA due to the depth of the joint under the skin precluding their efficacy.
Oral Pharmacologic Therapies
Acetaminophen Acetaminophen has traditionally been used as a first-line agent in treating mild-moderate OA. However, some meta-analyses have shown that acetaminophen was no better than placebo for symptom control. Other studies showed its use was associated with minimal improvement in pain control, which may be outweighed by the side effect profile of the medication. Acetaminophen use may still play a role in the treatment of knee, hip, hand, or foot OA for short treatment durations, or in patients who have contraindications precluding the use of oral or topical NSAIDs. The FDA has reduced the maximum daily dose of acetaminophen to 3 g/day. Regular monitoring for hepatotoxicity is recommended in those with prolonged use, especially for those taking the maximum recommended daily dose.
Oral NSAIDs NSAIDs inhibit the activity of cyclooxygenase (COX)-1 and -2 enzymes, providing analgesic and anti-inflammatory effects. They are recommended as first-line oral pharmacotherapy over all other oral medications and can be very helpful for OA patients who do not respond to topical NSAIDs, have multiple joint involvement, or suffer from moderate-
to-severe levels of pain. There is no strong evidence that a particular NSAID is more effective than other NSAIDs, but the side effect profiles make some safer than others for specific patients. Some patients may prefer specific NSAIDs (eg, meloxicam, naproxen) based on the frequency of dosing for convenience.
Patients may be started on a low-cost NSAID with a short half-life, such as ibuprofen. Initially, the medication should be started on the lowest possible dose, and if the response is not satisfactory after a few weeks, then the medication dosage may be slowly increased up to the maximum recommended dose. Switching to a different NSAID is another OA treatment management option.
While efficacious, NSAIDs should be prescribed with caution, particularly in patients with comorbidities that may increase NSAID toxicity. Gastrointestinal complications such as peptic ulcers and bleeding are potential side effects. This risk increases with older age, concurrent use of other medications (eg, glucocorticoids, anticoagulants), and longer therapy duration. Using either a COX-2 selective inhibitor or a nonselective NSAID in combination with a proton-pump inhibitor reduces this risk.
Nephrotoxicity is another potential toxicity. Patients with chronic kidney disease (CKD) stage IV or V (estimated glomerular filtration rate < 30 mL/min) should avoid NSAIDs. Nonacetylated salicylates, sulindac, and nabumetone may be less nephrotoxic than other NSAIDs.
Patients with cardiovascular disease are also at increased risk for cardiovascular adverse events (eg, myocardial infarct or stroke) associated with NSAIDs, particularly with the use of diclofenac. Patients should be made aware of such risks and monitored closely during treatment. Finally, concomitant use of low-dose aspirin and nonselective NSAIDs (eg, ibuprofen) may also render aspirin less effective when used for cardioprotection and stroke prevention.
COX-2 selective inhibitors A COX-2 inhibitor (ie, celecoxib) or a COX-2 selective NSAID (eg, meloxicam) can be a better treatment option for a subset of OA patients. They can effectively relieve painful OA symptoms with significantly less risk of GI side effects as compared to nonselective NSAIDs. Prescribing these medications to patients with cardiovascular risk factors should be done with caution however, as they may increase the risk of myocardial infarct, stroke, and other related conditions.
Narcotic analgesics Tramadol is a weak μ-opioid receptor agonist that also inhibits the reuptake of serotonin and norepinephrine. It can relieve pain in patients with hand, knee, or hip OA, but should only be used when patients have contraindications to NSAIDs, an inadequate response to other pharmacologic and nonpharmacologic treatment modalities, or are not good surgical candidates. Tramadol should be trialed prior to any nontramadol opioids. Like other narcotic agents, it can also cause nausea, dizziness, somnolence, vomiting, and increased mortality risk. Long-term use may also lead to physical dependence. However, respiratory depression and constipation are considered less of a problem with tramadol.
More potent narcotic medicines, such as oxycodone, hydrocodone, or morphine sulfate, may be considered in some OA patients only after failure of all other treatments. Patients who continue to have severe OA-related pain and disability despite trying other pharmacologic and nonpharmacologic OA treatments may be good candidates. Those who are unwilling or unable to undergo joint replacement surgery due to comorbid conditions (eg, significant cardiac or pulmonary disease) may also be reasonable candidates.
Narcotic agents are poorly tolerated in older people due to increased sensitivity to certain side effects, including constipation, urinary retention, confusion, and sedation. The risk of falls may also be increased in a population already vulnerable to falls due to joint disease and other risk factors. If narcotic medicines are to be started in older people, then the lowest dose should be prescribed for the shortest possible length of time. Adjuvant therapies with nonnarcotic pain relievers should also be considered.
Nutraceuticals Glucosamine and chondroitin sulfate are naturally occurring constituents of cartilage proteoglycans. They are popular “nutritional supplements” used by many OA patients, but their use is highly controversial. Although several clinical trials showed their efficacy in improving pain and function, these trials were primarily industry-sponsored and utilized a pharmaceutical grade of glucosamine sulfate, which is not available in the United States. The best available data with the lowest risk of bias indicate that these supplements were no more effective than placebo. The ACR/AF and OARSI recommend against the use of either glucosamine or chondroitin sulfate to treat knee or hip OA.
Other nutraceuticals have been investigated for their potential benefits in the treatment of OA. A number of nutraceuticals have reported a short-term reduction of pain, but most of the current studies on nutraceuticals are small, exhibit various forms of bias, and have short follow-up times.
Other options The use of serotonin and norepinephrine reuptake inhibitors, such as duloxetine, seems promising in the treatment of OA. It has shown moderate efficacy in treating symptoms for knee OA when used alone or when combined with NSAIDs, and its effects are likely to be similar in the treatment of hip and hand OA. The tolerability of the medication may be a limiting factor, however.
Disease-modifying osteoarthritis drugs (DMOADs) can potentially inhibit the structural disease progression of OA and improve OA-related symptoms. At present, there are no DMOAD therapies available on the market. However, several research studies are being conducted to determine DMOAD efficacy and safety. Many studies have focused primarily on preventing hyaline cartilage loss. Because the pathogenesis of OA involves multiple tissues, more recent studies are also targeting other tissues, including the subchondral bone. Most DMOADs under investigation have an anti-catabolic effect on cartilage and may also structurally modify subchondral bone. Studies of bisphosphonates showed no efficacy in the control of OA pain or improvement in function. Similarly, tumor necrosis factor inhibitors and IL-1 receptor antagonists administered subcutaneously or intra-articularly did not prove to be efficacious for treating erosive OA and are not recommended for use. Other options under investigation include inducible nitric oxide synthase inhibitors, MMP inhibitors, aggrecanase inhibitors, doxycycline, Wnt inhibition, intra-articular injection of an anabolic growth factor, fibroblast growth factor 18, and a cathepsin K inhibitor.
Intra-Articular Therapies
Glucocorticoid agents Intra-articular glucocorticoid (eg, methylprednisolone, triamcinolone) injections can significantly relieve pain due to OA. They can be most beneficial to OA patients with one or a few joints that continue to be bothersome despite oral pharmacologic therapies. They are particularly efficacious in patients with knee or hip OA. Knee joint injection can be administered in an ambulatory care setting. Hip joint injection, though, is often done with ultrasonographic or fluoroscopic guidance. When proper
technique is used, complications from intra-articular injections such as bleeding and infection are rare. Benefits may be short-lived, however, and pain relief typically lasts for up to 2 months. More than three injections within a 6-month period are not recommended, and the long-term effects these steroids have on the cartilage are still controversial.
Viscosupplementation Viscosupplementation, the intra-articular injection of hyaluronic acid, has been used for symptomatic knee OA. Hyaluronic acid is a high-molecular-weight glycosaminoglycan that is a constituent of synovial fluid. Hyaluronic acid in OA joints is often of low molecular weight, losing its biomechanical and anti-inflammatory properties. Recent meta-analyses demonstrated no clear benefit of intra-articular hyaluronic acid injection for the treatment of OA symptoms after addressing the component of bias from studies, especially in hip OA. Although the ACR/AF and OARSI OA management guidelines do not recommend using viscosupplementation based on the lack of evidence for treatment benefit, there may be individual patients who would benefit from a trial after the failure of all other treatment options.
Platelet-rich plasma and mesenchymal stem cell therapy Platelet-rich plasma (PRP) and mesenchymal stem cell (MSC) therapy are heavily marketed for many conditions including OA. These therapies are still experimental, have not been approved by the FDA, and should not be administered or obtained except in the case of an FDA-approved clinical trial. They may be marketed as “FDA-approved,” but only the equipment used to obtain and administer the PRP or MSC therapy has been FDA-approved.
Surgical Interventions
Surgical intervention should be considered in OA patients who continue to have significant pain and disability despite maximal use of nonpharmacologic and pharmacologic therapies. Patients must also be healthy enough to withstand surgery. Shared decision making has been shown to be beneficial for patients and surgeons with regards to expectations, satisfaction, and outcomes.
Arthroscopic debridement and joint lavage Arthroscopic debridement is the removal of loose bodies, debris, mobile fragments of articular cartilage, unstable torn menisci, and impinging osteophytes. The procedure invariably also includes joint lavage. Although it is a relatively common procedure, its practice is highly controversial. While a few uncontrolled studies have demonstrated short-term efficacy, most studies have shown that this procedure is no better than placebo in providing symptomatic relief for knee OA. Neither arthroscopic debridement nor joint lavage alone is recommended as a treatment option in the OARSI OA management guidelines.
Osteotomy Osteotomy is a surgical procedure in which the bone is cut to shorten, lengthen, or change bone alignment. High tibial osteotomy is a potential surgical treatment for knee OA. It is appropriate for unilateral knee OA with varus malalignment. Realignment of the varus deformity would reduce stress on the medial compartment of the knee by redistributing the weight of the body from the arthritic medial compartment to the healthier lateral compartment. Although the overall failure rate at 10 years is approximately 25%, the procedure can reduce pain, improve function, and delay the need for joint replacement.
Intertrochanteric varus or valgus osteotomy has been used for hip OA treatment for nearly a century. Pelvic or femoral osteotomies have also been used to correct the biomechanics and joint congruency in young patients with hip dysplasia to prevent the development of hip OA. Evidence in the efficacy of these procedures, however, is limited.
Joint replacement Joint replacement surgery is an irreversible procedure used in those with severe OA who have failed conservative treatment modalities, have persistent pain, and have had associated loss of function secondary to their symptoms. The average age at the time of knee replacement is the mid-sixties, although this procedure is being done more and more often in younger patients. Patients who undergo surgery often attain substantial improvements in pain, quality of life, and physical functioning; however, between 20% and 30% of patients have reported not being satisfied with surgery outcomes after total joint replacement. Maximal improvements are usually observed in the first 3 to 6 months with long-term benefit plateauing after 9 to 12 months.
Quality of life indicators following joint replacement also improve approximately a year after surgery and only decline gradually over time. Certain patient characteristics such as advanced age, obesity, and comorbidities may limit improvement in patient-reported outcomes after surgery and increase rates of complications, but the presence of these characteristics should not prevent patients from being offered surgery; rather, these factors should inform the shared decision-making process. With current advances, implants typically last 15 years or more. The risk of revision is
higher in patients with diabetes or obesity, and in those who undergo surgery before age 65 compared with those aged 65 or older.
Unicompartmental knee arthroplasty involves replacement of a part or section of the knee that is arthritic. It may be considered in patients with discrete knee pain and disease localized to the medial compartment.
Like total knee arthroplasty, it improves knee pain and function, but it requires a smaller surgical incision. Consequently, there is less postoperative pain, and hospital stays are shorter. The rehabilitation process also tends to be more rapid. Postsurgical complications, such as deep vein thrombosis and infection, are also fewer with unicompartmental than with total knee replacement surgery. However, unicompartmental knee arthroplasty may make subsequent total knee replacement surgery more complex and has a shorter implant lifespan than a total knee arthroplasty, making rates of revision higher.
Joint fusion Joint fusion surgery, also known as arthrodesis, may be selected in patients with severe OA of the wrist, ankle, or first MTP joint. It may also be used as a salvage procedure when knee joint replacement has failed. During the procedure, two bones on each end of a joint are fused, eliminating the joint itself. While a fused joint loses flexibility, it can bear weight better and may leave the patient completely pain-free.
Hand surgeries Surgery is considered when nonsurgical treatment options have not significantly helped patients with OA of the thumb base or OA of the interphalangeal joints. In the case of first CMC joint OA, trapeziectomy is the procedure of choice. More complicated surgical techniques have not proven to be more effective and are linked to increased rates of adverse events such as pain, instability, nerve dysfunction, complex regional pain syndrome, and infections. The procedure of choice for OA in the PIP joint is arthroplasty with a silicone implant, except for the second PIP joint, for which arthrodesis is the preferred surgical method. Similarly, arthrodesis is the best approach for the DIP joints.
PREVENTION
There are currently no therapies known to prevent the progression of joint damage due to OA. However, current research efforts are trying to identify preclinical biochemical and imaging biomarkers that will provide opportunities to diagnose and treat OA earlier in the disease course. Hence,
we may eventually be able to prevent the development and further progression of the disease.
There are also known potential theoretical targets for primary and secondary prevention of OA. A person’s weight is the largest identified modifiable risk factor for the development and progression of OA. The risk of knee OA has been shown to increase along with an increase in BMI. Therefore, weight loss for those who fall into the obese and overweight categories is important in primary prevention. Patients need to be educated on the benefits of weight loss and the need to achieve a normal body weight.
The Physical Activity Guidelines for Americans recommend at least 150 minutes of moderate-intensity aerobic physical activity or 75 minutes of vigorous-intensity physical activity per week; walking is a great way to accomplish this. Repetitive use due to a job, hobby, or sport and joint-related injuries have also been linked to OA. Therefore, avoidance of repetitive movements and education on joint protection techniques (eg, using good body mechanics) to reduce joint injury while working or playing sports are important. Finally, joint malalignment is one of the strongest predictors for progressive OA. Nonsurgical and surgical strategies may be considered to correct anatomical abnormalities.
CONCLUSIONS
OA is a highly prevalent disease among older adults worldwide. Older age, obesity, structural abnormalities, and previous injury are among the known risk factors for disease development. Patients often present with chronic pain, stiffness, and functional disabilities. Diagnosis is based on history and physical examination, and can be supported by characteristic radiographic findings. Management of OA should be tailored to the individual patient.
Treatment may include a combination of nonpharmacologic, pharmacologic, and surgical approaches, with different approaches being used at appropriate timepoints throughout the natural history of the disease.
FURTHER READING
Bannuru RR, Osani MC, Vaysbrot EE, et al. OARSI guidelines for the non-surgical management of knee, hip, and polyarticular osteoarthritis.
Osteoarthritis Cartilage. 2019;27(11):1578–1589.
Ferguson RJ, Palmer AJ, Taylor A, Porter ML, Malchau H, Glyn-Jones S. Hip replacement. Lancet. 2018;392(10158):1662–1671.
Fernandes L, Hagen KB, Bijlsma JW, et al.; European League Against Rheumatism (EULAR). EULAR recommendations for the non-pharmacological core management of hip and knee osteoarthritis. Ann Rheum Dis. 2013;72(7):1125–1135.
Hunter DJ, Bierma-Zeinstra S. Osteoarthritis. Lancet. 2019;393:1745–1759. doi:10.1016/S0140-6736(19)30417-9.
Katz JN, Arant KR, Loeser RF. Diagnosis and treatment of hip and knee osteoarthritis. JAMA. 2021;325(6):568. doi:10.1001/jama.2020.22171.
Kloppenburg M, Kroon FPB, Blanco FJ, et al. 2018 update of the EULAR recommendations for the management of hand osteoarthritis. Ann Rheum Dis. 2019;78:16–24. doi:10.1136/annrheumdis-2018-213826.
Kolasinski SL, Neogi T, Hochberg M, et al. 2019 American College of Rheumatology/Arthritis Foundation Guideline for the Management of Osteoarthritis of the Hand, Hip, and Knee. Arthritis Rheumatol.
2020;72(2):220–233.
Liu X, Machado GC, Eyles JP, Ravi V, Hunter DJ. Dietary supplements for treating osteoarthritis: a systematic review and meta-analysis. Br J Sports Med. 2018;52(3):167–175.
Loeser RF, Collins JA, Diekman BO. Ageing and the pathogenesis of osteoarthritis. Nat Rev Rheumatol. 2016;12(7):412–420.
Paterson KL, Gates L. Clinical assessment and management of foot and ankle osteoarthritis: a review of current evidence and focus on pharmacological treatment. Drugs Aging. 2019;36(3):203–211.
Price AJ, Alvand A, Troelsen A, et al. Knee replacement. Lancet.
2018;392(10158):1672–1682.
Reynard LN, Barter MJ. Osteoarthritis year in review 2019: genetics, genomics and epigenetics. Osteoarthritis Cartilage. 2020;28(3):275–284.
Roddy E, Menz HB. Foot osteoarthritis: latest evidence and developments.
Ther Adv Musculoskelet Dis. 2018;10(4):91–103.
van der Oest M, Duraku L, Andinopoulou E, et al. The prevalence of radiographic thumb base osteoarthritis: a meta-analysis. Osteoarthritis Cartilage. 2021;29(6):785–792. doi:10.1016/j.joca.2021.03.004.
Vina ER, Kwoh CK. Epidemiology of osteoarthritis: literature update. Curr Opin Rheumatol. 2018;30(2):160–167.
Chapter 53
Hip Fractures
Ellen F. Binder, Simon Mears
INTRODUCTION
Hip fracture is a major public health problem with significant consequences for older patients, their families, and the health care system. In 2010, there were approximately 260,000 adults aged 65 and older hospitalized for a hip fracture in the United States, and this number is expected to increase to 289,000 by 2030 due to the aging of the population and increasing longevity. Recent worldwide estimates are on the order of 1.7 million hip fractures annually, a number expected to surpass 6 million by the middle of this century. As seen in Figure 53-1, hip fracture incidence increases exponentially in both men and women with advancing age. The average age of a patient with hip fracture is 82 years. Among those who reach age 85, approximately 19% of women and 12% of men will experience a hip fracture and, of those who reach 90 years, 30% of women and 20% of men will sustain a hip fracture. Although the majority of hip fractures occur in older White women, 25% to 30% of hip fractures occur in men and, in the United States, 8% occur in non-Whites. Prominent risk factors for hip fracture are osteoporosis and propensity to fall. Underlying these essential conditions for having a hip fracture are the reduced bone strength and quality that are characteristic of osteoporosis and the multiplicity of medical, psychosocial, and environmental factors that lead to falls.
FIGURE 53-1. Age-specific incidence rates of hip fracture (per 1000 person-years): the Framingham study. (Reproduced with permission from Samelson EJ, Zhang Y, Kiel DP, et al. Effect of birth cohort on risk of hip fracture: age-specific incidence rates in the Framingham Study. Am J Public Health. 2002;92[5]:858–862.)
The direct medical and indirect nonreimbursed costs (eg, unpaid caregiving services and lost wages of patients and caregivers) of hip fracture have been estimated to be as high as $20 billion annually in the United States. Over the past few years, measures have been taken in an attempt to reduce costs of care for hip fractures. This approach, called bundling, gives a single standardized payment that includes the total cost for all care for the patient for 90 days after surgery. Hospitals with high costs are at risk to lose money with bundled care. The largest costs are typically postacute care and readmissions. The switch to bundled care forces hospitals to work to reduce readmissions and complications when possible, and attempt to send patients home rather than to subacute care. Bundling also promotes interaction between surgeons, geriatricians, and postacute care centers to try to focus rehabilitation goals and reduce length of stay in the postacute center.
Learning Objectives
To be able to describe the surgical and medical issues commonly experienced by older hip fracture patients.
To be able to describe the physiological and functional changes and psychosocial issues commonly experienced by older hip fracture patients during the year after the fracture event.
To be able to understand the impact of bundled care on hip fracture management.
To understand the role of the geriatrician in providing care to older hip fracture patients during the perioperative and recovery periods.
Among those who have experienced a hip fracture, approximately 18% of women and 36% of men are expected to die within the first year of their fracture. The highest mortality rates occur within the first few months after a fracture among those who are in the poorest health. In addition, comparison of survival rates of female hip fracture patients to similarly impaired women without fractures indicates that the fracture itself is responsible for nine extra deaths per 100 patients during the first 4 years following the fracture.
Epidemiological data from studies of women indicate that even in those with the lowest number of medical comorbidities and best functioning at the time of fracture, the mortality attributable to hip fracture continues to increase well beyond the first year post fracture. Causes of death in women and men are similar and approximately four times greater than their nonfracture counterparts for heart disease, three times greater for cerebrovascular disease, and three times greater for chronic obstructive pulmonary disease.
Interestingly, one study showed that men are far more likely than their nonfracture counterparts to die from infectious causes such as septicemia and pneumonia, in the first 2 years following hip fracture.
The intent of this chapter is to provide information about the medical and psychosocial status of the older patient who presents with a hip fracture and to discuss strategies for care and the role of the geriatrician in providing care during the acute hospital stay and the subsequent year or more of follow-up care.
Key Clinical Points
Hip fracture in older adults is associated with significant morbidity and mortality.
Complications of hip fracture can be prevented or mitigated through careful perioperative screening and intervention strategies initiated during the acute hospitalization, during a period of formal rehabilitation services, and over the course of the year after the fracture event.
Geriatricians can play a critical role in coordinating care for hip fracture patients, initiating appropriate state-of-the-art interventions, and optimizing communication between the patient, family members, and the health care team.
WHAT TO EXPECT WHEN SEEING PATIENTS IN HOSPITAL
Medical Presentation and Fracture Characteristics
Classically, a patient with a hip fracture presents with a painful, shortened, and externally rotated lower extremity after a fall and landing on the affected hip. Most patients are unable to bear weight on the extremity. A small percentage of fractures are nondisplaced or “hairline” fractures, and the patient may be able to bear weight and ambulate with pain. Nondisplaced fractures may progress to displaced fractures if not recognized. Patients having pain with gentle rolling of the lower extremity and a history of trauma should be evaluated for fracture. If plain radiographs are negative, magnetic resonance imaging (MRI) should be performed to look for bone marrow edema and fracture. MRI is more sensitive than computed tomography. If a nondisplaced fracture is found, treatment is either with non–weight bearing or in situ fixation to prevent propagation.
Approximately half of all hip fractures occur in the area of the femoral neck (or “intracapsular fractures”), and the other half occur in the area between the greater and lesser trochanters (“intertrochanteric fractures” or “extracapsular fractures”). Less common are fractures that occur within 5 cm below the lesser trochanter; these are called “subtrochanteric fractures.” Patients with intertrochanteric fractures tend to be older than those with femoral neck fractures and are more likely to have multiple medical comorbidities. A schematic of the hip anatomy is shown in Figure 53-2, which indicates the anatomy of the vascular supply to the hip region. This schematic is useful to understand the various surgical approaches to hip fracture care and the potential for blood loss at each site. As demonstrated in the schematic, the regions of the femoral neck and the femoral head derive their main blood supply from fine retinacular arteries that stem from the medial circumflex femoral artery. These delicate vessels are closely apposed to the femoral neck in this region. A fracture in the femoral neck is more likely to disrupt the vascular supply to the femoral neck and head, which can result in longer-term nonunion and osteonecrosis. Thus, fractures in the femoral neck region, particularly displaced fractures, are usually managed
with joint replacement or arthroplasty rather than internal fixation. Arthroplasty may be partial or total replacement. Total hip replacement is thought to give better long-term results and less pain for patients who are active. In contrast, the blood supply to the intertrochanteric area is plentiful and redundant, such that nonunion and osteonecrosis are less common in this region, and fractures can heal well after internal fixation procedures. Internal fixation is most commonly performed with a hip plate and screw or with an intramedullary hip screw. All of these procedures allow the patient to bear weight as tolerated after hip fracture repair.
FIGURE 53-2. A schematic diagram of the anatomy of the hip.
Impact of Age-Associated Physiological Changes and Comorbidities
The cumulative effect of environmental exposures, lifestyle, and genetic factors results in a remarkably heterogeneous geriatric population. Aging impacts most organ systems, although the degree to which organs are impaired from aging varies from individual to individual. In addition, several chronic medical conditions may also exist in a given individual.
Medical care of the patient with hip fracture must, therefore, be adapted to the individual patient’s needs. This requirement for a tailored approach to care provides opportunities for challenges and satisfaction from the practice of geriatric medicine.
The changes that occur with aging physiology result in decreased resilience to stress, the so-called “homeostenosis,” in most organs. For example, older individuals may have decreased lung volumes and decreased mucociliary clearance, resulting in an increased propensity for postoperative atelectasis and pneumonia. Because of decreased physiological reserve, older individuals are at particular risk for a wider range of iatrogenic complications and do not recover as well when complications and adverse events occur. Older individuals are more susceptible to delirium during the pre- and postoperative period. This complication not only increases the patient’s risk of dying in the subsequent year, but also can impair an individual’s ability to participate in rehabilitation. Many of these complications are predictable, and the geriatrician plays a critical role in team coordination and prevention of complications. Hip fracture brings all of these known and unknown challenges together with the psychological insult of an acute fracture and the need for surgical repair. The geriatrician must work with the surgical team to get the patient to surgery rapidly and to progress the patient through rehabilitation, with the goal of returning them to their baseline function.
Cognitive Status
Dementia is a prominent risk factor for falls and fractures and, not surprisingly, a significant number of patients with hip fracture have underlying cognitive impairment. Delirium is a common occurrence both at the time of presentation to the emergency department and postoperatively, occurring in up to 60% of patients after hip fracture repair. Underlying dementia is a major risk factor for the development of delirium. The presence of delirium and cognitive impairment portends a worse functional recovery for patients with hip fracture. Identification of underlying dementia and risk factors for delirium is important both for prognostication and so that modifiable risk factors can be avoided or minimized. Given the high risk of delirium in this patient population, it is particularly important to prepare the patient and family emotionally for this potential complication, as it can be quite frightening to unprepared family members.
Risk factors for delirium in hospitalized patients have been well described. Some of the most consistently reported risk factors include advanced age, chronic cognitive impairment or dementia, sensory impairment, male sex, presence of comorbid psychiatric disease, polypharmacy and use of psychoactive and narcotic medications, infection, use of restraints, sleep deprivation, and undertreatment of pain. Among hip
fracture patients, admission from an institutional setting and congestive heart failure (CHF) are important risk factors for delirium. Multi-component and nonpharmacologic interventions, preferably implemented by an interdisciplinary team, can prevent delirium and are cost-effective. Studies of proactive geriatric consultation with targeted recommendations, such as oxygen delivery, fluid balance, analgesia, elimination of unnecessary medications, regulation of bowel and bladder function, nutritional intake, mobilization, prevention of postoperative complications, assessment of environmental stimuli, and treatment of agitated delirium, have shown reductions in the incidence of delirium overall by a third, and in one study, a reduction in the incidence of severe delirium by half.
Nutritional Status and Physiologic Changes after Hip Fracture
Due to a number of factors, including pain and medications, reduced appetite is common during the acute hospital and rehabilitation period, and a significant number of patients will have difficulty meeting their caloric needs. Many hip fracture patients are undernourished prior to the fracture and therefore at high risk for malnutrition during their recovery.
Changes in body composition also are notable after a hip fracture. The average woman who fractures a hip can expect to lose 3% to 6% of her muscle mass within 2 months of the hip fracture and to have an increase in fat mass of 3% to 4% during the postfracture year. Losses of bone mineral density also are profound in this already osteoporotic group of older women. Older women lose nearly 3% of their total hip bone mineral density and more than 4.5% of their bone mineral density in the contralateral femoral neck during the year following a hip fracture. This is 12.5 times more than the loss of 0.5% that would have been expected in a group of women of the same age, comorbid disease status, and starting level of bone mineral density (Figure 53-3). Older men with hip fracture also show accelerated loss of bone mineral density in the contralateral hip that is greater than that expected from aging. Pharmacologic interventions to prevent secondary fractures should be considered for both women and men after hip fracture.
FIGURE 53-3. Expected and observed change in total hip bone mineral density (A) and femoral neck bone mineral density (B) during the 12 months following fracture. Legend for both panels: solid line, hip fracture (observed mean and standard error); broken line, Study of Osteoporotic Fractures (SOF; expected mean based on interpolated data obtained for a 42.3-month period). (Reproduced with permission from Magaziner J, Wehren L, Hawkes WG, et al. Women with hip fracture have a greater rate of decline in bone mineral density than expected: another significant consequence of a common geriatric problem. Osteoporos Int. 2006;17[7]:971–977.)
In addition, the prevalence of vitamin D deficiency among hip fracture patients is very high and contributes to the observed losses in bone density and muscle strength and to fall and fracture risk.
The Geriatrician’s Role in Preoperative Care
The geriatrician can serve a central role in caring for patients upon admission to the acute care hospital by evaluating the medical and perioperative issues, ensuring maximal medical stabilization prior to surgery, and providing early detection and ongoing vigilance regarding risks for, and development of, postoperative medical complications. The geriatrician is well positioned to discuss with the patient and family the anticipated hospital and postacute care procedures and transitions, thereby reducing uncertainty and alleviating anxiety at this unanticipated and challenging time.
Studies indicate that a multidisciplinary approach to acute care for hip fracture can improve clinical outcomes. Several models of orthogeriatric comanagement have been developed, ranging from geriatric consultation or liaison services to management on a geriatric ward with orthopedic consultation to fully integrated inpatient orthogeriatric units. A systematic review and meta-analysis of 18 studies found that orthogeriatric collaboration was associated with reduced in-hospital mortality, reduced long-term mortality, and reduced length of stay. A randomized trial of orthogeriatric care provided on a geriatrics ward (vs an orthopedic ward without geriatrics consultation) did not reduce the rate of in-hospital delirium, but did improve mobility at 4 months after surgery.
The majority of patients with a hip fracture will undergo surgical repair.
The types of surgical approach and anesthesia employed are largely the purview of the surgeon and the anesthesiologist and often depend on local practice patterns and physician training. No difference in mortality has been found between general and spinal anesthesia. Preoperative regional anesthesia with a nerve block has been shown to decrease pain in hip fracture patients. The nerve block can be performed by an anesthesiologist or a trained emergency physician. The pain relief from the nerve block may make the use of a Foley catheter unnecessary. The geriatrician should help to limit the use of catheters to those who cannot roll or get onto a bedpan because of pain.
To prepare the patient for hip fracture surgery, the attending physician should determine the risks of cardiovascular, pulmonary, and other complications and strive to reduce those risks. (Refer to Chapter 27, Perioperative Care: Evaluation and Management, for more details on the use of preoperative testing for risk stratification, preoperative pulmonary treatments, and the use of β-blockers and other medications to reduce the risk of perioperative adverse cardiac events in high-risk patients.) Medication reconciliation is particularly important. The patient may not have an accurate medicine list upon presentation to the emergency department, and calls to the pharmacy and/or primary care physician may be necessary to obtain correct information or clarify questions.
While surgical approaches usually offer more benefits than nonsurgical approaches, nonsurgical approaches should be considered for some selected patients. The clearest benefit of surgery over the nonsurgical approach is that surgery results in better anatomic alignment and better likelihood of ambulation. Nonsurgical approaches may be considered in patients who are unable to ambulate prior to the fracture, those with advanced dementia and contractures, those with little pain from the fracture, and those in whom surgical risks are very high or life expectancy is very short. The geriatrician can help by providing a “big picture” view of whether or not the patient will
benefit from surgical repair of the hip or whether palliative care should be considered. The determination of what may be most appropriate for a patient requires a thorough understanding of the patient’s pre-fracture functional and cognitive status, comorbidities, psychosocial factors, and the patient’s prior goals, values, and priorities. If it is determined that the patient is not a good surgical candidate, the geriatrician should be prepared to initiate and lead a discussion of the patient’s care goals and priorities and the available palliative care management strategies and resources, and to refer to palliative care and hospice colleagues and providers, as appropriate.
The Geriatrician’s Role in Postoperative Care
In the United States, a patient will usually remain in the acute hospital setting for 1 to 5 days after surgery for hip fracture. Patients should be mobilized the day of surgery if possible, and certainly by postoperative day 1. Early mobilization has been shown to be safe and effective for minimizing deconditioning and reducing the risk of complications such as delirium, constipation, pneumonia, thromboembolism, and pressure ulcer formation. If a catheter was placed, it should be removed the morning after surgery.
Adequate treatment of postoperative pain is important for maximizing participation in physical therapy and increasing mobility. The optimal pain medication regimen has not been determined. The preoperative nerve block often helps postoperative pain considerably and intravenous pain medicine is not required after surgery. Care should be taken to prescribe a customized regimen that provides sustained pain relief early in the postoperative period, minimizes sedation, and also anticipates episodic increases in pain associated with activities such as therapy sessions.
The geriatrician must monitor the patient closely for the development of postoperative complications and is in a key position to detect subtle delirium that could be a harbinger of an ominous underlying complication, such as pulmonary embolus, urinary tract infection, and pneumonia. Postoperative pulmonary complications are among the most lethal complications in this population. Deep breathing exercises with or without an incentive spirometer, early mobility, and use of physical and/or pharmacologic approaches to reduce the risk of deep venous thrombosis should all be instituted. Other important postoperative medical considerations the
geriatrician may be particularly adept at managing are reducing polypharmacy, pressure ulcer detection and management, maximizing sensory input, explaining the treatments and progression of care to patients and family, and communicating across the transitions of care to reduce errors during this vulnerable time. Atrial fibrillation and CHF are other typical postoperative complications. Worsening of CHF often happens after hospital discharge, and this should be taken into account if the patient is transferred to a subacute nursing facility.
The geriatrician should also determine the patient’s risk for subsequent falls and fractures. A falls history should be obtained, including a complete description of the fall that led to the fracture as well as any previous falls, with particular attention to remediable risk factors to be addressed prior to discharge back to home (see Chapter 43, Falls). The geriatrician should discuss with the patient, family, and other providers the issue of osteoporosis treatment (see Chapter 51, Osteoporosis). To facilitate optimal secondary fracture prevention measures, many hospitals have enrolled in the International Osteoporosis Foundation “Capture the Fracture” program (www.capturethefracture.org) to implement a postfracture care coordination program. Such programs bring together and coordinate a multidisciplinary team of providers who can assess and manage osteoporosis and fall prevention.
If antiresorptive therapy is being considered for treatment of osteoporosis, the vitamin D level should be repleted prior to starting a bisphosphonate. Therefore, if not performed recently, a serum vitamin D level should be checked during the acute hospital admission.
Discharge Planning
Prior to discharge from the in-patient setting, a multidisciplinary assessment involving the geriatrician, the orthopedic surgeon, nursing, social work, physical therapy, and occupational therapy should be performed. The goal of this assessment is to decide on discharge plans and rehabilitation needs and goals. The following considerations should be taken into account: the patient’s current level of mobility, the patient’s postoperative medical and skilled nursing needs, the home environment, social support and economic resources available, self-care skills, and requirements for activities of daily living (ADL). The geriatrician plays a key role in coordinating the patient’s care and communicating with patients and families as they transition through
the health care system. A critical role for the geriatrician is to ensure continuity of care. Recent studies have shown that, with each transition to a different site of care, there is the potential for an adverse effect on the care of patients with hip fracture, and physicians must, therefore, be vigilant to ensure adequate communication and good continuity of care to minimize this risk.
While a patient’s medical diagnoses and baseline functional status are important considerations for maximal functional recovery, his or her social support network and home environment are equally important. Social support will often dictate how long a patient needs to stay in inpatient rehabilitation before it is safe to discharge the patient home or to the appropriate level of care. For example, if a patient had been previously living independently in a multistory home where the bathroom and bedroom are on the second floor and the kitchen on the first floor, one of the following would need to occur prior to discharge home: (1) the patient would need to be able to negotiate a full flight of stairs independently and safely; (2) the patient would need to rearrange the home to live on the ground floor and be able to transfer and ambulate short distances on a level surface independently and safely; (3) the patient would need a responsible caregiver to assist with daily tasks; or (4) the patient would need to stay with a family member on one floor. Not all patients have access to, or resources for, assistance in their homes, but some have families or friends who are able to provide this support, which is usually unreimbursed. This can also be a major stressor when the caregiver has to take time off from work and/or travel long distances to provide assistance. In collaboration with the interdisciplinary team, the geriatrician can assist family members and caregivers to anticipate and prepare for the patient’s upcoming needs. An assessment of the home environment by an occupational and/or physical therapist provides essential information for home discharge planning.
Transitions of Care
After the acute hospital stay, the hip fracture patient may be discharged home or to a postacute setting. This is typically an acute rehabilitation center or a subacute nursing facility. Very functional patients with capable families and good social support should be discharged home if possible. Those who cannot return home upon hospital discharge will require a short-term stay in an acute inpatient rehabilitation facility or a subacute skilled nursing facility
(SNF). Transitions of care to these facilities are difficult and fraught with potential for medical errors. The accepting physician may rarely see the patient in the facility and the amount of rehabilitation services provided can vary. In the best situation, a trained geriatrician will continue care for the patient at the site of rehabilitation, and a handoff can be performed prior to transfer. A detailed consultation note immediately prior to the date of transfer, which details the comorbidities that are being managed by the geriatrician and the plan of care, can be extremely helpful to the accepting physician and rehabilitation team.
Changes in Hip Fracture Care: Bundled Care for Hip Fracture and COVID-19
Bundled care means that one payment is given to the hospital for the entirety of care of a patient for 90 days after surgery. This rate is set by Medicare.
Hip fractures are part of several voluntary bundles, which are radically changing the way care is delivered. If a hip fracture is treated with arthroplasty, the patient may be part of the arthroplasty bundle, which consists mostly of elective patients with osteoarthritis. Another bundle covers other procedures involving the entire femur, including hip fractures; it also includes periprosthetic fractures treated with plates and fixation of the mid or lower portion of the femur. Bundled care is different because any complication requiring hospital readmission is included within the single payment given for the bundle. The payment also covers the cost of any postdischarge care, including inpatient or outpatient services. The most expensive portion of care for the hip fracture patient is generally the postacute care and readmission, not the initial hospital stay. Participation in a bundle requires active involvement of the entire care team to minimize postsurgical complications, facilitate communication and continuity across transitions of care, shorten postacute stays without reducing quality of care, and avoid readmissions. This can involve follow-up via telemedicine and close monitoring of patients while in a subacute facility.
Utilizing rehabilitation facilities or programs with a relationship to the acute care hospital or medical center can be helpful to try to ensure that clearly articulated rehabilitation goals are set and achieved to get the patient home earlier.
The COVID-19 pandemic changed hip fracture care. The treatment of patients with active COVID-19 and hip fracture depends upon the severity of the COVID-19 infection. Those who have respiratory symptoms and a fracture appear to have poor outcomes, while those who are asymptomatic should probably have early surgery as is usually performed. Postoperative visits are often curtailed if patients are COVID-19-positive, and telemedicine visits must be used. The pandemic also made patients fearful of being in the hospital or in a nursing home, further underscoring the importance of home discharge when possible.
POSTFRACTURE CHANGES IN PHYSICAL AND PSYCHOSOCIAL FUNCTION AND IMPLICATIONS FOR CARE
Physical Function
Hip fractures have significant effects on functioning and body composition. Figure 53-4 shows the proportion of patients with hip fracture who, prior to their fracture, were able to perform routine tasks involving the lower extremities but were unable to perform them a year later. Twenty percent of those who could put on their own pants without assistance prior to their fracture required assistance to do so 1 year later. More striking is the proportion of patients who needed assistance from another person or required the use of equipment to walk across a small room (40%), use the toilet (66%), or climb five stairs (90%). Instrumental tasks are also affected: of those who, prior to their fracture, could clean house, get to places beyond walking distance, and shop independently, 62%, 53%, and 42%, respectively, required assistance to perform these tasks a year later. Other functional consequences of hip fracture include increases in cognitive impairment and depressive symptoms, as well as increased problems with gait and balance. Despite advances in surgical procedures, postoperative care, and long-term rehabilitation, hip fractures rank in the top 10 of all impairments worldwide in terms of causing disability and functional decline.
FIGURE 53-4. Lower extremity activities of daily living—percentage of those unimpaired before fracture with impairment at 12 months after fracture. (Data from Magaziner J, Hawkes W, Hebel JR, et al. Recovery from hip fracture in eight areas of function. J Gerontol A Biol Sci Med Sci. 2000;55[9]:M498–M507.)
Psychological Status
The diagnosis of hip fracture conveys to patients and families a significant degree of psychological stress. Most lay persons are quite aware of the risk of mortality and poor recovery of function after a hip fracture, the high rate of nursing home use and long-term placement, and, for those who do return home, the high rates of dependency on others for care. Although this is not the case for most patients, many patients and families believe that having a hip fracture signifies the “beginning of the end.” These psychological stressors may contribute to the high rates of postoperative depression that have been described. Patients may endorse depressive symptoms during their hospital stay and for up to 6 months after fracture; those who report symptoms of depression even transiently tend to recover less well than those who do not endorse depressive symptoms at all.
FUNCTIONAL RECOVERY POSTHIP FRACTURE
Recovery in function following a hip fracture can be anticipated in most patients, although many will fail to reach their prefracture functional levels. This recovery appears to follow a sequence that may be instructive for management during the initial year after the fracture. Depression, upper
extremity ADL, and cognitive function reach their peak level of recovery by approximately 4 months postfracture, while tasks associated with mobility, such as balance and gait, reach a plateau at approximately 9 months postfracture. Interestingly, the more complex tasks that are indicative of disability, such as performing lower extremity activities and instrumental tasks, recover by approximately a year (Figure 53-5).
FIGURE 53-5. Time to recuperation following hip fracture in eight areas of function. ADL, activities of daily living. (Reproduced with permission from Magaziner J, Hawkes W, Hebel JR, et al. Recovery from hip fracture in eight areas of function. J Gerontol A Biol Sci Med Sci. 2000;55[9]:M498–M507.)
Patients who recover more slowly than anticipated should have an assessment of factors that may be contributing, such as persistent delirium, depression, polypharmacy, vitamin D and other nutritional deficiencies, and social isolation, so that a targeted multidisciplinary plan can be developed to address the issues of concern.
POSTFRACTURE REHABILITATION
Rehabilitation Services
The majority of patients with hip fracture will undergo a period of rehabilitation therapy after the fracture repair (see Chapter 55, Rehabilitation). Rehabilitation services are provided only if both of the following requirements are fulfilled: (1) the patient has a functional loss and
(2) is able to participate in rehabilitation efforts. Individuals with severe cognitive impairment or those with unresolved delirium that interferes with participation in standard rehabilitation, those with severe prefracture disability (eg, bed-bound), and those with terminal illness, may not qualify for standard rehabilitation services. Alternative approaches may be required. The geriatrician can judge potential benefits and work with the multidisciplinary rehabilitation team to recommend the most appropriate rehabilitative care.
Rehabilitation services can be provided in one or more of the following settings: an inpatient acute rehabilitation unit, a subacute rehabilitation unit in a SNF, or on an outpatient basis, either at home or in a clinic. The location of postdischarge rehabilitation is determined by a number of factors, including the patient’s functional impairments, comorbidities, ability to participate in skilled therapy sessions, social support, and personal preferences. In the United States, rehabilitation is typically limited by Medicare reimbursement and rarely goes beyond 30 days. In one study, approximately 50% of hip fracture patients were discharged to a SNF subacute unit, 20% were discharged to an acute inpatient rehabilitation unit, 15% were discharged to home, and 14% were discharged directly to a long-term care facility. The COVID-19 pandemic shifted services more toward home care, when feasible.
Inpatient acute rehabilitation units are usually located in specialized units of an acute care hospital, or in a dedicated rehabilitation hospital. Patients with complex medical or rehabilitation needs and with good endurance may receive their rehabilitation in the acute setting. The requirements for acute rehabilitation are that the individual must be able to participate in therapy for at least 3 hours/day and that at least two rehabilitation therapeutic disciplines be involved (eg, physical therapy and occupational therapy).
Patients with less complex medical or rehabilitation needs, who are not able to participate in 3 hours/day of rehabilitation but who otherwise have a functional decline that would benefit from rehabilitation and are unable to be discharged to home, can receive rehabilitation in the subacute SNF setting. These patients typically receive interventions from a licensed professional 5 days a week, though on a less intense basis than in the acute rehabilitation setting.
Patients with milder functional and mobility impairments or with good support at home may receive their rehabilitation on an outpatient basis, either at home or at an outpatient facility. In order to qualify for home rehabilitation, patients must have a functional decline that would benefit from rehabilitation, have sufficient caregiver support to enable them to be cared for at home, and be homebound and therefore unable to attend rehabilitation at an outpatient clinic. Since rehabilitation will be delivered by homecare professionals in the patient’s home, heavy equipment must not be required.
For individuals with less complex rehabilitation needs, services may be delivered in an outpatient clinic. Outpatient rehabilitation may also continue after acute, subacute, and homecare services, to facilitate attainment of rehabilitation goals.
The Role of the Geriatrician in the Rehabilitation Setting
The geriatrician can also play a key role in medical management in rehabilitation settings, especially in SNFs. Regardless of the rehabilitation setting, medical care should focus on the following principles: treatment of chronic medical conditions, continued postoperative hip fracture care, prevention of complications from the associated functional decline and immobility, and prevention of future falls and fractures.
Despite ongoing therapy by rehabilitation professionals, increased risk for complications related to immobility, such as DVT, pressure ulcers, and constipation, persists in the rehabilitation setting. Thus, continued vigilance to their prevention and detection must be exercised. Based on current recommendations, DVT prophylaxis should be continued for 21 to 31 days after hip fracture surgery. Pressure ulcers are common after hip fracture surgery, with reports of an incidence rate of stage II or greater ulcers within 21 days of fracture of approximately 5%. Given the decreased mobility and frequent use of narcotic analgesics for postoperative pain, constipation is a common occurrence, and efforts should be made to prevent this with an adequate bowel regimen. Continued monitoring by the geriatrician of the patient’s weight, nutritional status, symptoms of depression and cognitive impairment, and the success of related interventions is very important for ensuring optimal recovery.
Many patients with hip fracture in the acute and subacute rehabilitation settings receive their medical care from a physician who was not their primary care provider prefracture and may not be familiar with their history or support system. Care for these patients represents a challenge for the attending physician, as these patients are typically older and with significant and complicated comorbid disease. More than 75% of patients with hip fracture have at least four comorbid medical conditions prior to their fracture. In addition to supervising the care of the hip fracture and any sequelae, the attending physician must ensure that these chronic medical conditions continue to be addressed.
What to Tell the Patient and Family to Expect With Regard to Function
Because hip fracture may result in persistent functional limitations and disability, patients with hip fracture may anticipate significant changes in their lifestyle. The geriatrician needs to provide support and encouragement during the initial recovery period following hip fracture surgery when pain and disability are greatest. Patients and their families can be reassured that significant improvements in lower extremity function during the first few months postfracture are likely to occur, while at the same time cautioned that lower extremity limitations can be prolonged. Up to 50% of older adults who had previously been independent may be dependent on mobility aids, such as a cane or a walker, for ambulation at 1 year postfracture. These limitations and fear of recurrent falls may result in self-limitation of daily activities.
These declines in function may result in loss of independence, and, as a result, the older patient with hip fracture may be facing the prospect of moving from their independent residence to a higher level of care for the first time. Individuals who have an injury as a result of a fall, such as a hip fracture, are 10 times more likely to be admitted to a SNF. This may be one of the most frightening aspects of the postfracture recovery period, and helping patients to understand that this fear is an expected reaction to having a hip fracture may enable them to overcome it and resume their activities more quickly.
Older patients and those with multiple health conditions may need to be told that their recovery may be slower than for others. Compared to younger patients, those who are older than 85 years have been found to require longer rehabilitation stays, have worse recovery of lower extremity function, are more likely to be discharged to long-term care, and are more likely to have
persistent pain, which may be accompanied by ADL limitations and reduced quality of life. Depression and cognitive limitations should also be discussed with patients and families. It is important for them to understand that depressive symptoms and cognitive limitations are common after a hip fracture. They should also be informed that these changes are not always persistent, as depressive symptoms and cognitive limitations will frequently resolve with limited intervention within 2 months, and that treatment for mild depression is often useful to avoid any lingering anxiety or depression that accompanies the hip fracture.
The Geriatrician’s Role after Formal Rehabilitation for the Hip Fracture
After the patient has returned home, the focus of follow-up care should be to monitor whether the patient is making expected gains in recovery, evaluate barriers if expected gains are not achieved, and prevent subsequent falls and injury. Most hip fracture patients plateau in their functional level by 6 months after the fracture event. The occurrence of complications may alter that course, and many patients will continue to realize gains over a longer period.
Geriatricians are well trained to identify barriers to recovery after a hip fracture and initiate interventions that can further enhance recovery and reduce fall risk. Optimally, the patient should be reevaluated at the time of discharge from the acute rehabilitation setting, and again upon discharge from home physical therapy, to assess the patient’s recovery level, physical impairments, and need for outpatient services. Multidisciplinary fall risk assessment (eg, home environment assessment, medication review, vision testing, postural blood pressure measurement, gait and balance testing, and targeted neurological, musculoskeletal, and cardiac examinations) followed by interventions directed at these risks can reduce fall risk and is cost-effective in older adults at risk of recurrent falls.
Continuation of an exercise program after formal rehabilitation services have ended has been shown to improve physical function during the year following surgical repair of the fracture. Exercise programs that include progressive-resistance training appear to be most effective at improving measures of physical function and quality of life. However, home-based exercise programs for hip fracture patients have also shown modest improvement in physical function. The physician should discuss with patients their options for continued outpatient therapy and/or exercise, determine their preferences, and provide appropriate referrals and prescriptions.
Although hip protectors have received attention for their potential to reduce hip fractures, meta-analyses suggest that these offer little or no benefit for patients in randomized studies. As a result, these devices should not be considered as alternatives to the other fall reduction and bone-strengthening interventions discussed above.
CONCLUSION
Care for older patients with hip fracture represents one of the greatest challenges for the geriatrician because of the multiplicity of medical and psychosocial factors involved in postfracture care. A summary of some of the key issues a geriatrician should focus on at various stages of hip fracture care appears in Table 53-1. Answering these questions requires vigilant attention preoperatively, postoperatively, in the rehabilitation setting, and after rehabilitation efforts have terminated. Although a hip fracture event has significant consequences for patients and their families, through this attention and communication with patients, families, and the other clinicians involved in the care of patients as they transition through the various sites of care, it is the goal of the geriatrician to minimize long-term adverse effects and to ensure that the overall well-being of the patient is maximized.
TABLE 53-1 ■ FOCUS OF THE GERIATRICIAN AT VARIOUS STAGES OF CARE FOR PATIENTS WITH HIP FRACTURE
FURTHER READING
Dyer SM, Crotty M, Fairhall N, et al. A critical review of the long-term disability outcomes following hip fracture. BMC Geriatr. 2016;16(1):158.
Falaschi P, Marsh D (eds). Orthogeriatrics: The Management of Older Patients With Fragility Fractures [Internet]. Cham (CH): Springer; 2021.
Grigoryan KV, Javedan H, Rudolph JL. Orthogeriatric care models and outcomes in hip fracture patients: a systematic review and meta-analysis. J Orthop Trauma. 2014;28:e49–e55.
HEALTH Investigators, Bhandari M, Einhorn TA, et al. Total hip arthroplasty or hemiarthroplasty for hip fracture. N Engl J Med. 2019;381(23):2199–2208.
Malik AT, Khan SN, Ly TV, Phieffer L, Quatman CE. The “hip fracture bundle”—experiences, challenges and opportunities. Geriatr Orthop Surg Rehabil. 2020;11:2151459320910846.
Perracini MR, Kristensen MT, Cunningham C, Sherrington C. Physiotherapy following fragility fractures. Injury. 2018;49(8):1413–1417.
Reyes BJ, Mendelson DA, Mujahi N, et al. Postacute management of older adults suffering an osteoporotic hip fracture: a consensus statement from the International Geriatric Fracture Society. Geriatr Orthop Surg Rehabil. 2020;11:2151459320935100.
Sieber FE, Neufeld KJ, Gottschalk A, et al. Effect of depth of sedation in older patients undergoing hip fracture repair on postoperative delirium: the STRIDE randomized clinical trial. JAMA Surg. 2018;153(11):987–995.
Smith T, Pelpola K, Ball M, Ong A, Myint PK. Pre-operative indicators for mortality following hip fracture surgery: a systematic review and meta-analysis. Age Ageing. 2014;43:464–471.
Chapter 54
Therapeutic Exercise
Kerry L. Hildreth, Kathleen M. Gavin, Christine M. Swanson, Sarah J. Wherry, Kerrie L. Moreau
AGING AND BENEFITS OF EXERCISE
A common belief among the lay public, as well as among many health care professionals, is that much of the disease and functional decline that accompanies aging is inevitable, a result of the “aging process” itself.
However, much of the physical decline and reduced physiologic reserve attributed to aging is, in fact, caused by complex interactions of disuse, environmental and lifestyle factors, disease, and genetics.
Physical activity level and fitness are associated with a greater average lifespan and inversely related to the risk of mortality. Among adults older than 60 years, engaging in at least 150 minutes per week of moderate to vigorous physical activity is associated with a 28% reduction in all-cause mortality. Even a modest dose of moderate to vigorous physical activity confers a mortality benefit compared to no activity. Higher cardiorespiratory fitness is associated with lower mortality, with a striking 80% reduction in mortality risk between elite performers (those > 2 standard deviations above the mean for age and sex) and the lowest performers.
With respect to resistance exercise, older adults performing guideline-concordant strength training had 46% lower odds of all-cause mortality than those who did not. A recent meta-analysis of more than 1400 studies reported that, compared to no exercise, any resistance training was associated with a 21% reduction in all-cause mortality. Combining resistance and aerobic exercise appears to confer even greater benefit than either alone. In the same meta-analysis, resistance exercise plus aerobic exercise was associated with a 40% reduction in all-cause mortality. Among older women, those who participated in both strength training and at least 150 minutes per week of aerobic activity had a 46% lower risk of all-cause mortality.
An inverse dose–response relationship has also been noted between physical activity and the risk of developing many chronic diseases, as discussed later in this chapter. Despite the overwhelming benefits of physical activity, only 35% of persons older than 60 years meet current recommendations for physical activity (these recommendations are included in Table 54-3, discussed later in this chapter); this percentage drops significantly in older age groups.
Learning Objectives
Describe the changes in aerobic exercise capacity and in skeletal muscle strength and power that occur with aging.
Describe the effects of aerobic and resistance exercise training on aerobic exercise capacity, skeletal muscle strength and power, and physical function in older adults.
Describe the role of exercise in the prevention and treatment of common geriatric disorders.
Key Clinical Points
The physiologic responses of aerobic exercise capacity and of muscle strength and power to aerobic and resistance training are preserved in older adults.
Because the relation between physiologic impairment and functional limitations is nonlinear, older adults with little or no physiologic reserve may realize large functional improvements with exercise.
Older adults can safely engage in even high-intensity exercise; exercise and physical activity recommendations should be specific and tailored to the individual to enhance long-term adherence.
It is increasingly clear that many of the health benefits of exercise can be accrued simply through a more active (nonsedentary) lifestyle. This concept may be especially helpful in trying to encourage older individuals who feel they are unable or unwilling to engage in formal exercise training.
Sedentary behavior includes activities that require an energy expenditure of no more than 1.5 times resting energy expenditure, and encompasses activities such as sitting, watching TV, using a computer, reclining, or lying down during waking hours. Objective measurements of sedentary time in the National Health and Nutrition Examination Survey (NHANES) indicate that adults aged 60 or older spend more time engaged in sedentary behaviors (8–9 hours per day, or 60% of their waking time) than any other segment of the population. In older adults, sedentary behavior is associated with an increased risk of sarcopenia and functional limitations, and is inversely associated with physical function. Prolonged uninterrupted periods of sedentary behavior appear especially deleterious for metabolic health. In older adults, the number of breaks in sedentary time is associated with higher levels of physical function, independent of moderate to vigorous physical activity. Changing sedentary behavior may have an immediate impact on the health of older adults. In one study, older adults who reduced their sedentary time over a 2-year period had a lower risk of all-cause mortality compared to those who either increased or did not change their sedentary time.
AGING AND EXERCISE
Aerobic Exercise Capacity
One of the key physiologic changes that occurs with aging and contributes to the decline in physical function is a decline in aerobic exercise capacity, best measured by the amount of oxygen consumed at maximal or peak exercise (maximal or peak aerobic power; VO2max or VO2peak). Longitudinal data indicate that the rate of decline in VO2peak accelerates markedly with each successive decade, exceeding 20% per decade among adults in their 70s and beyond (Figure 54-1).
FIGURE 54-1. Per-decade percent cross-sectional and longitudinal changes in peak VO2 by gender and age decade. (Reproduced with permission from Fleg JL, Morrell CH, Bos AG, et al. Accelerated longitudinal decline of aerobic capacity in healthy older adults. Circulation. 2005;112[5]:674–682.)
Age-associated declines in VO2max can be attributed to alterations in both central and peripheral determinants of exercise capacity. Reductions in maximal heart rate and in the maximal ability of muscle to extract oxygen from the blood, driven in large part by the age-associated loss of muscle mass, appear to contribute to declines in VO2max in older adults (also see Chapter 73, The Aging Cardiovascular System). In all age groups, endurance-trained adults have a higher VO2max than their age-matched sedentary peers, suggesting that participation in habitual aerobic exercise may attenuate the age-related decline in VO2max.
Despite declines in VO2max with aging, the response to an aerobic exercise training program in previously sedentary, healthy older adults is comparable to that observed in younger subjects, with improvements mediated by both central and peripheral adaptations (also see Chapter 73). Improvement in VO2max has been reported to be between 6% and 30%, with significant improvements observed in as little as 3 weeks.
Skeletal Muscle Strength and Power
Aging is associated with a significant loss of skeletal muscle mass, and consequently of muscle strength (maximum force exerted) and power (rate of force development), in men and women, independent of physical activity status. The loss of muscle strength accelerates with advancing age; by age 80, strength has declined by approximately 30% to 40% from its peak. The rate of decline in muscle power, which is a stronger determinant of physical function in older age than muscle strength, appears to be even greater than the decline in strength. The loss of muscle strength and power is associated with both mortality and physical disability in older adults; however, both can be increased by improving muscular function (eg, muscle cell hypertrophy) or by improving neurological function (eg, learning).
To a large extent, the age-associated loss of muscle strength and power reflects loss of muscle mass. However, other factors are clearly involved. In the Health, Aging and Body Composition (Health ABC) Study, for example, the decline in muscle strength over 3 years was three times greater than the parallel rate of decline in muscle mass. Regular physical activity may prevent or attenuate some of the age-related loss in strength. Importantly, vigorous strengthening exercise can produce substantial gains in strength in older adults.
A primary mechanism for the effects of resistance exercise is muscle hypertrophy, and older adults appear to achieve increases in muscle fiber size and cross-sectional area comparable to those in younger adults.
However, even modest increases in muscle fiber size or cross-sectional area are accompanied by proportionally greater increases in strength, again suggesting involvement of other mechanisms. Proposed mechanisms include learning effects from improvements in motor skill coordination, and neural adaptations such as increased voluntary muscle activation, reflecting recruitment, firing and synchronization of motor units, and better coordination of synergistic and antagonist muscle cocontraction.
Nonlinear Relationship Between Physiologic Impairment and Functional Limitation
Healthy humans have excess physiologic capacity not tapped during most daily activities and can thus lose a fair amount of physiologic reserve before functional limitations occur. Indeed, the relationship between leg strength and walking speed is nonlinear. This nonlinear relationship is intuitive: if strength were linearly related to walking speed, highly trained weightlifters would walk ridiculously fast (16–30 km/h) because they are several times as strong as normal adults, who walk at around 5–8 km/h. That is, above a certain threshold level of adequate physiologic reserve, function is normal; below the threshold, function is impaired (Figure 54-2). The exact shape of the curve depends on the task and the physiologic capacity of interest. Figure 54-2 may oversimplify the situation because most tasks have multiple physiologic determinants that may interact to affect behavioral ability.
FIGURE 54-2. Theoretical relationship between physical fitness and functional status. The curvilinear relationship shows a threshold effect: above the threshold level of fitness, functional status is normal; below it, function is impaired. A curvilinear relationship implies that the benefit from exercise depends on the target group. Three hypothetical exercise studies are shown. Each study produces the same absolute improvement in fitness. In the frail adults of study 1, exercise produces a large improvement in functional status. In the healthy adults of study 3, no benefit on functional status is seen. Study 2 shows intermediate benefits. (Data from Buchner DM, Larson EB, Wagner EH, et al. Evidence for a non-linear relationship between leg strength and gait speed. Age Ageing. 1996;25[5]:386–391.)
To illustrate the concept of thresholds, consider that in steady-state measures of oxygen consumption, walking on a level grade at 5 km/h requires
3.2 METs (1 MET = 3.5 mL O2/kg/min). With illness or inactivity, aerobic capacity may fall below the 3.2 METs required for walking. Because exercise can increase aerobic capacity, it should improve walking in this
situation. However, aerobic exercise would not affect ability to walk at 5
km/h in adults whose aerobic capacity already exceeds 3.2 METs.
Thus, exercise may produce a large improvement in function in frail adults with little or no physiologic reserve. Indeed, well-designed studies of exercise in frail adults report improvements in functional limitations with exercise. For example, the Lifestyle Interventions and Independence for Elders (LIFE) study of frail older adults (70–89 years) demonstrated that a structured, moderate-intensity physical activity intervention reduced the risk of mobility disability compared to a control health education program. The greatest risk reduction (HR = 0.75) was observed in older adults with lower physical function at baseline, compared to those with moderate functional impairments at baseline (HR = 0.95). In contrast, exercise would be expected to have much smaller effects on functional limitations in healthy older adults, at least in basic activities of daily life.
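The threshold arithmetic above can be sketched in a few lines of code. This is an illustrative model only, using the figures cited in the text (level walking at 5 km/h demands roughly 3.2 METs; 1 MET = 3.5 mL O2/kg/min); the function and constant names are our own, not from any clinical tool.

```python
# Illustrative sketch of the MET threshold concept described in the text.
# Assumes the cited figures: level walking at 5 km/h demands ~3.2 METs,
# and 1 MET = 3.5 mL O2/kg/min.

WALKING_5KMH_METS = 3.2
ML_O2_PER_KG_PER_MIN_PER_MET = 3.5

def mets_to_vo2(mets: float) -> float:
    """Convert a workload in METs to oxygen consumption (mL O2/kg/min)."""
    return mets * ML_O2_PER_KG_PER_MIN_PER_MET

def can_sustain_walking(aerobic_capacity_mets: float) -> bool:
    """Threshold model: function is intact only if capacity meets the task demand."""
    return aerobic_capacity_mets >= WALKING_5KMH_METS

print(round(mets_to_vo2(WALKING_5KMH_METS), 1))  # 11.2 mL O2/kg/min to walk at 5 km/h
print(can_sustain_walking(2.8))  # False: capacity below threshold, walking impaired
print(can_sustain_walking(4.0))  # True: above threshold, extra capacity adds no walking ability
```

The binary output of `can_sustain_walking` captures why the same absolute gain in aerobic capacity helps a frail adult (who crosses the threshold) but not a healthy one (who is already above it).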
EXERCISE AND COMMON GERIATRIC DISORDERS
A number of common geriatric disorders can be improved with exercise, although many questions remain about the type and intensity of exercise, sex differences in response to exercise, and mechanisms underlying the benefits. Table 54-1 summarizes the current knowledge of the effects of different exercise modalities on several common geriatric disorders.
TABLE 54-1 ■ EXERCISE EFFECTS IN COMMON GERIATRIC DISORDERS
Cardiovascular Diseases
Although the mortality rates attributable to cardiovascular diseases (CVD) have declined, CVD burden in older adults remains high. There is a consistent, independent inverse relationship between aerobic fitness and physical activity levels and CVD mortality. Both short bouts and longer continuous sessions of physical activity are equally effective for reducing coronary heart disease (CHD) risk, provided the total energy expended is similar. Although a dose response has been demonstrated in some studies, even low to moderate physical activity can reduce risk for CVD mortality, with no additional protection from participating in more vigorous activities.
In contrast to the overwhelming evidence of the benefit of aerobic exercise to reduce CVD risk, few data exist on the association between strength training and CVD risk. Strength training is generally associated with reduced CVD mortality; however, the relationship may be J-shaped, with low to moderate levels (1–50 minutes per week) associated with the lowest risk of CVD mortality.
Similar to physical activity, fitness has been reported as being at least as good a predictor of both overall and CVD-related death as blood pressure, obesity, smoking, or diabetes. Because of the reported benefit of increasing physical activity and fitness levels on CVD risk, aerobic exercise and strength training are typically prescribed for older adults with CVD as part of a comprehensive cardiac rehabilitation program to increase aerobic capacity, improve CVD risk factors, prevent disease progression, and reduce the risk for future cardiovascular events and death. In fact, individuals with stable coronary artery disease randomized to exercise training improved their aerobic capacity and had a higher event-free survival (reduced repeat revascularization and hospitalizations) than those receiving percutaneous coronary intervention.
Both aerobic exercise and strength training improve aerobic exercise capacity to a similar degree in patients with CHD, and the improvements are enhanced when the two exercise modalities are combined. Aerobic exercise and strength training are also beneficial in improving exercise tolerance and quality of life in other CVD patient populations including heart failure (see Chapter 76) and peripheral arterial disease (PAD, see Chapter 78).
Supervised aerobic exercise training is associated with reduced mortality (35%) and hospital readmission (28%) in patients with chronic heart failure. In patients with PAD, any physical activity greater than light intensity is
associated with reduced mortality compared to no physical activity or only light-intensity activities. Because walking is effective in improving absolute claudication distance (ACD), guidelines recommend that individuals with PAD participate in a supervised walking program that gradually increases speed and distance as an initial noninvasive approach, using claudication pain during walking (onset of pain, or a mild-to-moderate or maximal tolerable level) to titrate exercise intensity. Other modes of exercise also improve ACD. However, the data are insufficient, particularly with regard to resistance training, to make any clinical recommendations. Thus, there is a need for further definitive, large clinical trials of alternative or complementary exercise interventions for PAD in older adults.

Exercise training programs in CVD patients have numerous physiologic effects on the cardiovascular system and CVD risk factors (Table 54-2).
Improvements in VO2max of 15% to 25% are driven by central adaptations (ie, improved myocardial oxygenation, enhanced left ventricular function), peripheral adaptations, or both, depending on the population. For example, in coronary artery disease (CAD) and heart failure patients, those with depressed left ventricular function may not show cardiac adaptations; rather, the improvements in VO2max are largely attributed to improvements in maximal arteriovenous oxygen difference due to improved mitochondrial oxidative enzymatic capacity.
TABLE 54-2 ■ EFFECTS OF EXERCISE TRAINING ON THE CARDIOVASCULAR SYSTEM AND CARDIOVASCULAR RISK FACTORS
In addition to improving multiple CVD risk factors, exercise training programs reduce arterial stiffness and improve vascular endothelial function in patients with CVD, measures that are important for overall vascular homeostasis. In general, aerobic exercise training programs improve vascular function in older adults independent of disease state. However, there are noted sex differences in the magnitude of improvement with exercise training, with near restoration of endothelial function to youthful levels in healthy older men, but attenuated or minimal improvements in postmenopausal women not on estrogen-based hormone therapy. Exercise training may also increase circulating bone marrow–derived endothelial progenitor cells (EPCs) and improve EPC function. EPCs facilitate vascular repair and vasculogenesis, and are reduced in CVD patients. Other potential mechanisms of exercise training on CVD could be related to regression of coronary artery stenosis and formation of collateral vessels, resulting in enhanced myocardial perfusion.
The benefits of exercise on CVD risk are thought to be due at least in part to improvements in lipid profiles. Higher levels of physical activity are associated with less atherogenic lipoprotein profiles in young and middle-aged individuals, although data in older adults are lacking. There is no consistent effect of resistance training on lipoproteins.
Although exercise training is beneficial overall for CVD patients, there are a few conditions in which exercise may not be beneficial and is actually contraindicated. These conditions include unstable angina, uncontrolled hypertension, uncontrolled cardiac arrhythmias, recent myocardial infarction,
severe aortic stenosis and other valvular diseases, acute pericarditis, and decompensated heart failure.
Hypertension
Hypertension is covered in Chapter 79. Over 80% of adults age 60 or older with hypertension are treated with antihypertensive therapies, and more than 30% of hypertensive adults age 80 or older are taking three or more classes of antihypertensive medications. Although antihypertensive medications produce long-term benefit, these medications can have numerous side effects, especially in older patients. Thus, using nonpharmacologic treatments (eg, weight loss, low-sodium diet, and exercise) is appealing, especially in older adults with mild to moderate hypertension.
Both leisure-time activity and aerobic exercise capacity (ie, VO2max) are inversely related to risk for hypertension, with the lowest risk for hypertension in the most active and fit people. Hypertensive older Swedish men (born in 1914) who regularly exercised vigorously had an adjusted relative risk of 0.43 for total mortality and 0.33 for cardiovascular mortality. Given the apparent benefits of increased physical activity on aerobic exercise capacity and reduced hypertension risk, the Eighth Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC 8) recommends increased physical activity designed to enhance aerobic exercise capacity as initial therapy to prevent, treat, and control hypertension.
Aerobic exercise training programs effectively lower blood pressure in older adults with mild to moderate hypertension. Overall, aerobic exercise training programs or increasing physical activity of moderate intensity and of adequate volume result in a 4 to 10 mm Hg decline in systolic blood pressure and a 3 to 8 mm Hg decline in diastolic blood pressure in individuals with stage 1 hypertension, regardless of age or sex. The effects of aerobic exercise training in older adults with stage 2 hypertension or in those with resistant hypertension are less clear. In resistant hypertension, moderate-intensity exercise training reduced 24-hour ambulatory blood pressure similarly to that reported for mild to moderate hypertensives. Although there are limited data comparing different exercise intensities, low- to moderate-intensity exercise may be more effective at lowering blood pressure than more vigorous exercise. Indeed, even light Tai chi exercise or increasing
daily walking can reduce and even normalize blood pressure in sedentary older hypertensive patients.
There are limited and inconsistent data on the effect of strength training on blood pressure in older hypertensive adults. Excessive blood pressure elevations can occur during high-intensity resistance training, especially if associated with the Valsalva maneuver. However, with low- to moderate-intensity strength training and proper breathing techniques, this is not a concern. Overall, progressive resistance training results in a 3 mm Hg decrease in both systolic and diastolic blood pressure. However, the effect of resistance training on systolic blood pressure in adults older than 50 years and on systolic and diastolic blood pressures in hypertensives is smaller and not statistically significant. Given the small number of studies available in hypertensive older adults, additional research is warranted before recommendations can be made on resistance training for blood pressure reduction in this population.
The mechanisms underlying the blood pressure–lowering effects of exercise in hypertensive adults are not completely understood, although changes in body composition, sympathetic nervous system activity, peripheral vascular resistance, and insulin levels have all been postulated. For example, weight loss alone or combined with exercise provides a greater blood pressure–lowering effect than exercise alone. Aerobic exercise training decreases arterial stiffness and improves endothelial function in hypertensive adults. However, whether the decreases in blood pressure with exercise are due to improvements in arterial stiffness and endothelial function, or vice versa, is unclear.
Obesity
As described in Chapter 30, obesity is associated with increased morbidity and mortality from multiple medical complications, and has been consistently correlated with mobility impairment and poor physical function, especially in older adults. While weight loss alone can improve cardiometabolic risk factors, fitness, function, and quality of life in obese older adults, it is also associated with loss of fat-free mass (FFM) and decreases in bone mineral density (BMD). Given these potential harms, there is general agreement that exercise should be a component of any weight loss intervention, in order to minimize loss of muscle and bone with weight loss in older obese adults.
Diet plus exercise interventions improve multiple cardiometabolic and functional outcomes in older adults, although effects on cardiovascular
events or mortality have not yet been demonstrated. For example, the Diabetes Prevention Program (DPP) lifestyle intervention of moderate-intensity exercise and modest weight loss (7%) was very effective in preventing progression to diabetes in older (60–85 years) participants. Compared to younger people in the DPP, older participants had greater reductions in waist circumference, lost more weight, and were more likely to achieve the goal of at least 150 minutes of exercise per week.
A randomized, controlled trial by Villareal et al. provides the most compelling evidence for including exercise in any weight loss intervention in older obese adults. They compared 52 weeks of a weight loss diet, exercise (aerobic and strength training), or both in more than 100 older obese adults with mild to moderate functional impairment. Participants receiving the diet intervention lost weight, whereas weight remained stable in the exercise-only group. Physical and metabolic function improved in all groups, with the greatest improvements in the diet plus exercise group. The expected decreases in FFM and BMD with weight loss were attenuated in the diet plus exercise group, but not completely reversed. Because the negative effects of weight loss on body composition and BMD in older obese adults are not fully offset by exercise, an important question is whether weight loss per se is necessary, or whether exercise alone can confer most of the benefits without the associated risks of weight loss in this population.
Osteoporosis
The decline in BMD with age is well recognized (see Chapter 51), and there is an epidemic of osteoporotic fractures that affects women more than men and that primarily involves hip, vertebral, and wrist fractures. Mechanical loads are critical to skeletal integrity. Animal studies confirm the importance of mechanical strain to bone modeling and remodeling. They have also identified that (1) the osteogenic response to mechanical loading is optimized with relatively few repetitions of high-magnitude forces; (2) the application of force must be in a dynamic, rather than static, fashion; and (3) fast strain rates are more osteogenic than slow strain rates. Furthermore, the osteogenic response to mechanical loading is mediated locally in skeletal regions subjected to the strain, highlighting the need for exercises to specifically target regions of the skeleton at risk of fracture. It appears that bone cells lose sensitivity to mechanical loading after relatively few loading cycles.
For example, four sets of 90 load applications per day were more osteogenic
than one set of 360 loading cycles, and interposing an 8-hour recovery period between loading sessions resulted in a greater osteogenic response than when the recovery period was 0.5 hour. These concepts have not been rigorously evaluated in humans, but they suggest that multiple short bouts of exercise per day may be more effective in preserving bone health than a single, longer daily session. The concept of allowing bone to “rest” between loading cycles may also have implications for resistance training. For example, if the intent is to perform three sets of eight repetitions of several exercises, it might be of greater benefit to bone to perform one set of each exercise and then cycle back through for the second and third sets, rather than doing three consecutive sets of each exercise.
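The rotated-set scheme suggested above can be sketched concretely. The snippet below is a hypothetical illustration (the exercise names and set count are placeholders, not from the source), contrasting the conventional consecutive-set ordering with the rotated "circuit" ordering for three exercises of three sets each:

```python
exercises = ["leg press", "chest press", "seated row"]  # placeholder exercises
n_sets = 3

# Conventional ordering: all sets of one exercise back to back,
# so each skeletal region receives its loading cycles consecutively.
consecutive = [ex for ex in exercises for _ in range(n_sets)]

# Rotated ("circuit") ordering: one set of each exercise, then cycle back,
# interposing a rest for each region between its loading bouts.
rotated = [ex for _ in range(n_sets) for ex in exercises]

print(consecutive)  # ['leg press', 'leg press', 'leg press', 'chest press', ...]
print(rotated)      # ['leg press', 'chest press', 'seated row', 'leg press', ...]
```

Both schemes deliver the same total loading volume; only the spacing of the loading cycles for each region differs, which is the variable the animal data suggest matters for the osteogenic response.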
The effects of exercise on bone mass have been estimated by comparing athletes who participate in different sports or physically active versus sedentary people. For example, the humerus in the dominant arm of tennis players had approximately 30% greater cortical bone thickness when compared to the nondominant arm. Athletes who participate in muscle-building activities (eg, weight lifting and body building) and in activities that involve jumping or vaulting (eg, volleyball and gymnastics) also tend to have elevated BMD. These data suggest that exercise in younger individuals may increase peak bone mass—an important protective factor for reducing osteopenia later in life. In contrast, the primary benefit of physical activity in older adults may be to preserve, rather than increase, bone mass.
Although both weight-bearing aerobic exercise and resistance exercise can increase bone mass in older women and men, increases in BMD appear to require a vigorous level of exercise, which supports the contention that the osteogenic response to exercise is attenuated with aging. In postmenopausal women, exercise may increase BMD by up to 5%, as compared to no-exercise controls, but average increases generally are only 1% to 3%. These modest increases may reflect difficulty reaching the strain threshold needed for bone remodeling to occur due to decreased muscle mass and strength, declines in hormones and growth factors that affect the sensitivity of bone to strain, and/or insufficient calcium and vitamin D.
There is also evidence that exercise can prevent fractures. Results from prospective cohort studies, such as the Nurses’ Health Study and the Study of Osteoporotic Fractures, suggest that the risk for hip fracture is reduced by approximately 50% in the most physically active quintile of the population.
In evaluating the potential benefit of simple walking activity, the Nurses’
Health Study found a dose-response benefit for both duration and speed of walking, such that more hours per week spent walking and fast walking speed conferred reduced fracture risk. Even hours per week spent standing was inversely related to hip fracture incidence. Perhaps most importantly, changes in physical activity over several years were related to hip fracture incidence in the expected manner: decreases in physical activity were associated with increased risk, and vice versa. Such findings suggest that any type of ambulatory physical activity may confer skeletal benefits and that increased physical activity should be advocated at any age as a strategy to reduce fracture risk.
The low frequency of fractures makes well-designed, adequately powered randomized controlled trials extremely expensive. Very few exercise trials have included fractures as an outcome, or reported fractures as an observation. Analysis of 11 exercise intervention studies that reported fractures found that exercise reduced the risk of overall fracture by approximately 50% and vertebral fracture by approximately 40% compared to controls, although the authors urged caution due to the strong likelihood of publication bias. The Erlangen Fitness and Osteoporosis Prevention Study compared fracture incidence among 137 postmenopausal osteopenic women either participating in a supervised exercise intervention (two group and two home sessions per week), or a sedentary control group for 12 years.
Although fewer fractures occurred in the exercise group, the difference between groups did not reach statistical significance.
Important areas for future research include determining the effects of age and sex in the response of bone to exercise. In addition, new methods such as peripheral quantitative computed tomography may provide important information on bone quality (eg, strength and geometry) in response to exercise.
Cancer
Cohort and case control studies consistently support an inverse relation between physical activity and overall cancer incidence and mortality. The effects appear to be stronger in men than in women, but many studies excluded women. Overall, occupational and leisure physical activities were associated with a 30% independent protective effect on overall cancer risk in men, and men who engaged in regular vigorous activities had a 20% reduction in cancer risk compared to sedentary men. Analysis of more than
15,000 cancer deaths over 12 years in the NIH-AARP Diet and Health Study found that compared to those who reported rarely or never engaging in moderate to vigorous physical activity, those who reported more than 7 hours/week had a lower risk of cancer mortality (HR = 0.89).
In persons diagnosed with cancer, exercise is associated with improved surgical outcomes, decreased symptoms and side effects of treatment, and improved physical function and psychological health. Participation in exercise post cancer diagnosis has been estimated to increase cancer survivorship by 50% to 60%.
To date, the most compelling data involve the relationship between activity and risk of colon or breast cancer. A reduced risk of colon cancer (relative risk, 0.4–0.9) has been found in groups with high physical activity levels compared to those with low levels. In general, controlling for other cancer risks including tobacco, alcohol, age, obesity, and diet does not diminish the relationships, and this finding has been demonstrated in men and women of different racial and ethnic groups. Among patients with colon cancer, any prediagnosis physical activity was associated with a 25% reduction in colon-cancer mortality, and a 26% reduction in all-cause mortality compared to patients who reported no physical activity prior to diagnosis. Moderate to high levels of postdiagnosis physical activity were also associated with reductions in all-cause mortality ranging from 24% to 39%.
Physical activity reduces the risk of breast cancer overall by 15% to 20%. The strongest effect appears to be in postmenopausal breast cancer, where the risk reduction ranges from 20% to 80% when comparing the most active to the least active groups. These associations remain even after adjusting for potential confounding factors such as age, body mass index (BMI), reproductive history, tobacco use, and diet. There appears to be a dose response, with higher levels of activity associated with greater risk reduction. As with colon cancer, exercise in breast cancer patients is associated with better physical function and fewer treatment-related side effects.
Depressed Mood
Subjects who exercise regularly consistently report an improved sense of well-being and reduced tension and anxiety. Large epidemiological studies have demonstrated an association between low levels of physical activity
and depressive symptoms. These studies have also shown physical activity to be an independent predictor of the development of depressive symptoms. In the National Health and Nutrition Examination Survey I Epidemiologic Follow-Up Study, low levels of physical activity predicted depressive symptoms 8 years later in White women, with those reporting little to no physical activity twice as likely to develop depressive symptoms as those reporting moderate to high levels.
As a treatment for clinical depression, structured, mixed exercise programs result in a modest reduction in symptom severity. The effects of exercise appear to be comparable to those of behavioral or pharmacologic therapy. Proposed mechanisms for the effects of exercise on mood include increases in endorphin and monoamine levels, as well as social engagement and improvements in self-efficacy and self-esteem.
Impaired Cognition
There may be a positive effect of exercise on cognition. Numerous cross-sectional and longitudinal studies, including the Nurses' Health Study and the Mayo Clinic Study of Aging, have found an association between higher levels of physical activity and both better cognitive test scores and reduced risk of cognitive impairment and dementia. Among 1200 older adults in the Framingham Study, a hazard ratio of 0.55 was found for incident all-cause dementia after 10 years of follow-up in participants reporting moderate to heavy physical activity compared to less active participants. Among those in the lowest quintile of physical activity, the hazard ratio for incident dementia was 1.45.
Results from exercise intervention studies have been mixed, although the balance of evidence supports moderate positive effects of exercise training on cognition in both cognitively normal and cognitively impaired individuals. In older adults without known cognitive impairment, 8 of 11 studies using aerobic exercise interventions found positive effects on cognitive function that coincided with improvements in maximal oxygen uptake (VO2max). A meta-analysis of 16 trials in older patients with dementia found a significant positive effect on cognitive function with exercise training, although the authors emphasized caution given the substantial heterogeneity between trials.
Some studies have also reported cognitive benefits with resistance training alone, but the greatest cognitive effects were observed with mixed
training paradigms. Exercise appears to improve function in multiple cognitive domains, with the strongest effects in executive control functions. Important areas for further study include determining the duration and intensity of exercise needed to realize cognitive benefits, and potential sex differences in the cognitive responses to exercise.
The bases for the exercise-associated improvements in cognition are presently an active area of investigation. Animal studies support a neuroprotective effect of exercise through promotion of neuroplasticity, increases in brain-derived neurotrophic factor, and enhanced neurogenesis.
Sleep Disorders
Age effects on sleep and sleep disorders are covered in Chapter 44. Physical activity and regular exercise are beneficial for sleep in older adults. Higher levels of cardiorespiratory fitness and physical activity in older adults are associated with better self-reported sleep quality. For instance, objectively measured indices of sleep quality (eg, sleep onset latency, wake time after sleep onset, sleep efficiency) are better in physically fit older men compared to sedentary controls, and higher levels of physical activity are protective against insomnia in older adults. Although few longitudinal exercise studies have specifically focused on older adults, a systematic review and meta-analysis of six studies in adults older than 40 years demonstrated that 10 to 16 weeks of aerobic exercise has positive effects on self-reported sleep quality measured using the Pittsburgh Sleep Quality Index.
It appears that slow wave sleep (SWS), which plays a role in the restorative functions of sleep, is particularly sensitive to exercise training. Older physically fit men have more SWS than sedentary older men, and exercise training increases SWS in older men and women. One longer-term study (12 months) demonstrated that in older adults with mild to moderate sleep problems, exercise was associated with decreases in stage 1 sleep, increases in stage 2 sleep, and fewer awakenings.
There remain many questions regarding the effects of exercise on sleep in older adults. For example, most studies in older adults have examined the effects of aerobic training, but at least one study has shown that progressive resistance training can improve subjective measures of sleep quality. It is not known whether aerobic exercise, resistance training, or a combination of the two is most effective for improving sleep. The effects of exercise intensity have not been established, but lower-intensity activities such as yoga and Tai chi also improve self-reported measures of sleep quality. Similarly, lower-intensity exercise (eg, stretching and flexibility) was associated with greater improvements in self-reported sleep quality than higher-intensity exercise performed at 60% to 75% of maximal heart rate. The effects of frequency of exercise have also not been defined, but a study in older men (~64 years) showed that after a 16-week exercise training period, SWS increased on a day when exercise was performed, but not on a sedentary day. These findings suggest that physical activity should be done on most days of the week to maximize the benefits on sleep. Whether exercise has different effects in men and women is not known.
It should also be recognized that there may be a reciprocal relationship: poor sleep is associated with lower levels of physical activity. For example, in a sample of participants from the Healthy Women Study (~73 years), greater self-reported sleep efficiency and less sleep fragmentation were associated with higher physical activity levels the following day. Thus, there appears to be a circular association whereby increased physical activity improves sleep, and improved sleep facilitates greater physical activity levels.
Parkinson Disease
Exercise is associated with reduced mortality rate and with improvements in functional mobility and activities of daily living in individuals with Parkinson disease (see Chapter 61). Though data are limited, controlled trials of both resistance and aerobic training in individuals with Parkinson disease have demonstrated improvements in mobility, gait, and quality of life. The intensity of exercise required to realize benefits in Parkinson disease is an area of active investigation, as some studies have found greater improvements with a higher dose of exercise, while others have not.
Despite impaired force production and increased fatigability, individuals with Parkinson disease are able to participate safely and effectively in high-intensity resistance exercise, and appear to derive benefits similar to those in neurologically healthy adults. In one study, 12 weeks of high-force eccentric resistance training in adults with Parkinson disease was associated with 6% muscle hypertrophy, a 24% increase in torque, and an 18% improvement in 6-minute walk performance compared to a standard exercise program. Aerobic exercise interventions—mainly treadmill exercise—consistently improve gait, mobility, and quality of life.
Stroke
Stroke is a leading cause of disability in older adults, with 20% to 25% of stroke survivors requiring at least some assistance with activities of daily living (see Chapter 62). These functional deficits predispose patients to a sedentary lifestyle, further increasing the risk of stroke and CVD, the leading causes of death among stroke survivors. Physical rehabilitation in stroke patients has traditionally focused on the first few months after the event, as prevailing wisdom held that further functional improvements were unlikely after this time. However, improvements in aerobic capacity and sensorimotor function can be achieved with continued rehabilitation. Physical activity and exercise recommendations for stroke survivors from the American Heart Association identify three primary goals: (1) preventing complications from prolonged inactivity; (2) decreasing the risk of recurrent stroke and cardiovascular events; and (3) increasing aerobic fitness. Specific recommendations for aerobic exercise and strength training are similar to those recommended for all adults. In addition, incorporation of stretching and balance/coordination activities is recommended to prevent muscle contractures, and to improve the ability to safely perform activities of daily living.
RISKS ASSOCIATED WITH EXERCISE
Despite the overwhelming evidence that exercise confers multiple benefits and can be performed safely even in frail older adults, concerns about potential injury and other adverse events are common among patients and health care providers. These fears may prevent patients from engaging in the amount and intensity of exercise needed to realize health benefits, thus presenting a potential barrier to increasing physical activity levels.
“Overuse” injuries involving soft tissues are by far the most common and are likely to increase with advancing age. Eccentric exercise (eg, lowering a weight that has been lifted) may predispose to excess muscle injury.
Appropriate warm-up and cool-down periods, as well as emphasis on stretching and flexibility, are likely to be especially important in reducing soft-tissue injury in an older population.
The prevalence of asymptomatic coronary artery disease is higher in older compared to younger adults, and there is a transient increase in the risk of sudden death occurring during a bout of vigorous exercise, especially in
previously sedentary individuals. However, the small increased risk of a cardiovascular event with exercise is clearly outweighed by the reduction in risk during the nonexercising period of the day. Thus, the overall risk of sudden death in active men is only 30% of that in sedentary men.
Furthermore, there is lower cardiovascular-related and overall mortality in active individuals.
In older patients with diabetes mellitus, careful attention to the possibility of exercise-induced hypoglycemia is critical because of the sustained improvement in insulin sensitivity for 24 to 48 hours following vigorous exercise. Meticulous foot care and supportive, well-fitted shoes are also important for the exercising diabetic patient. Patients with proliferative retinopathy should avoid anaerobic (specifically isometric) exercise, such as power lifting, because of the increased ocular and systemic pressure occurring with the Valsalva maneuver.
In the past, a pre-exercise assessment, including a complete history and physical examination, and an exercise stress test were recommended for adults older than 60 before vigorous exercise. Such an evaluation is expensive and could present a significant barrier to exercise. Furthermore, there are few data to substantiate this recommendation. Current recommendations suggest that asymptomatic individuals who plan to exercise at moderate intensity do not need screening before initiating an activity program. The geriatric medicine axiom of “start low and go slow” should be applied to beginning a safe physical activity program. It is appropriate for patients to notify their health care providers of their intent to begin an exercise program, because adjustments in medications or dosages may be necessary. The health care provider can also work with the individual to tailor the exercise program to the individual’s abilities and be a source of ongoing support and encouragement.
RECOMMENDATIONS
There is now ample evidence that older adults can safely engage in both aerobic and resistance exercise training, and that exercise has beneficial effects on numerous important health-related end points in this population. Despite these acknowledged benefits, older adults are the least physically active age group. Potential barriers to exercise in older adults include perceptions of safety, chronic conditions and physical limitations, and access to exercise programs or facilities.
Consensus recommendations for physical activity in older adults are summarized in Table 54-3 and were updated in 2018 by the Department of Health and Human Services (DHHS) in the Physical Activity Guidelines for Americans, 2nd edition. The DHHS guidelines stress the importance of avoiding sedentary behavior, and that any amount of physical activity provides some health benefits. The specific modifications for older adults include: (1) being as physically active as their conditions allow if the target of 150 min/week is not possible because of chronic conditions; (2) engaging in balance exercises for individuals at risk of falling; (3) determining the level of effort for physical activity relative to level of fitness; and (4) understanding how chronic conditions may affect the ability to safely engage in regular physical activity. Key to recommendations for exercise in older adults is the need to individualize the program of physical activity. While there is a dose-response relationship between the intensity of exercise and the improvement in most outcome measures, significant cardiovascular and metabolic improvements can be obtained when less intense exercise is maintained over sufficiently long periods of time. Less intense exercise regimens may be more acceptable to some older individuals and appear to make long-term compliance more likely. Low-intensity programs may be necessary in frail populations, such as those with stroke, heart failure, and chronic lung disease, in which more intensive exercise is not tolerated.
While multiple short bouts of exercise (at least 10 minutes each) may be effective in improving certain outcome measures, longer sessions remain preferable if these can be tolerated.
TABLE 54-3 ■ SUMMARY OF PHYSICAL ACTIVITY RECOMMENDATIONS FOR OLDER ADULTS
Because the physical activity goals may seem daunting, especially for inactive older adults, specific instructions are more helpful than general advice to “increase physical activity.” Provision of written exercise prescriptions from health care providers increases activity levels among previously sedentary adults. In addition, an increasing array of technologies allows individuals to set activity goals and to monitor and track activity in real time.
Changes in lifestyle are difficult to maintain at any age, and recidivism rates for exercise programs are high. This problem may be reduced by: (1) careful attention to warm-up periods and slow progression in an effort to reduce early injuries; (2) enthusiastic leadership; (3) regular assessment of improvement with personalized feedback and praise; (4) spousal and family support for participation; (5) flexible goals (time rather than distance) set by the participant; and (6) use of distraction techniques such as music.
ACKNOWLEDGMENTS
Dr. Robert Schwartz and Dr. Wendy Kohrt wrote this chapter in the 6th edition, and Dr. Edward Melanson contributed to the 7th edition chapter. Some material from the previous versions has been retained in this edition. The authors also gratefully acknowledge the advice and encouragement of Drs. Schwartz and Kohrt in the preparation of this chapter.
FURTHER READING
Billinger S, Arena R, Bernhardt J, et al. Physical activity and exercise recommendations for stroke survivors. A statement for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2014;45(8):2532–2553.
Bottaro M, Machado SN, Nogueira W, Scales R, Veloso J. Effect of high versus low-velocity resistance training on muscular fitness and functional performance in older men. Eur J Appl Physiol. 2007;99:257–264.
Chodzko-Zajko WJ, Proctor DN, Fiatarone Singh MA, et al. Exercise and physical activity for older adults. Med Sci Sports Exerc. 2009;41(7):1510–1530.
Diabetes Prevention Program Research Group. Reduction in the incidence of type 2 diabetes with lifestyle intervention or metformin. N Engl J Med. 2002;346:393–403.
Feskanich D, Willett W, Colditz G. Walking and leisure-time activity and risk of hip fracture in postmenopausal women. JAMA. 2002;288:2300–2306.
Hollings M, Mavros Y, Freeston J, Singh MF. The effect of progressive resistance training on aerobic fitness and strength in adults with coronary heart disease: a systematic review and meta-analysis of randomised controlled trials. Eur J Prev Cardiol. 2017;24(12):1242–1259.
Hupin D, Roche F, Gremeaux V, et al. Even a low-dose of moderate-to-vigorous physical activity reduces mortality by 22% in adults aged ≥60 years: a systematic review and meta-analysis. Br J Sports Med. 2015;49:1262–1267.
Kesaniemi YK, Danforth E Jr, Jensen MD, Kopelman PG, Lefebvre P, Reeder BA. Dose-response issues concerning physical activity and health: an evidence-based symposium. Med Sci Sports Exerc. 2001;33:S351–S358.
Kohrt WM, Bloomfield SA, Little KD, Nelson ME, Yingling VR. American College of Sports Medicine Position Stand: physical activity and bone health. Med Sci Sports Exerc. 2004;36:1985–1996.
Kramer AF, Erickson KI, Colcombe SJ. Exercise, cognition, and the aging brain. J Appl Physiol. 2006;101:1237–1242.
Kraschnewski JL, Sciamanna CN, Poger JM, et al. Is strength training associated with mortality benefits? A 15 year cohort study of US older adults. Prev Med. 2016;87:121–127.
Latham NK, Bennett DA, Stretton CM, Anderson CS. Systematic review of progressive resistance strength training in older adults. J Gerontol A Biol Sci Med Sci. 2004;59A:48–61.
Orchard TJ, Temprosa M, Goldberg R, et al. The effect of metformin and intensive lifestyle intervention on metabolic syndrome: the Diabetes Prevention Program randomized trial. Ann Intern Med. 2005;142:611–619.
Pahor M, Guralnik JM, Ambrosius WT, et al. Effect of structured physical activity on prevention of major mobility disability in older adults: the LIFE study randomized clinical trial. JAMA. 2014;311:2387–2396.
Rejeski WJ, Bray GA, Chen SH, et al. Aging and physical function in type 2 diabetes: 8 years of an intensive lifestyle intervention. J Gerontol A Biol Sci Med Sci. 2015;70(3):345–353.
Tanaka H, Seals DR. Endurance exercise performance in masters athletes: age-associated changes and underlying physiologic mechanisms. J Physiol. 2008;586(1):55–63.
US Department of Health and Human Services. Physical Activity Guidelines for Americans, 2nd edition. Washington, DC: US Department of Health and Human Services; 2018.
Villareal DT, Chode S, Parimi N, et al. Weight loss, exercise, or both and physical function in obese older adults. N Engl J Med. 2011;364:1218–1229.
von Stengel S, Kemmler W, Kalender WA, Engelke K, Lauber D. Differential effects of strength versus power training on bone mineral density in postmenopausal women: a two year longitudinal study. Br J Sports Med. 2007;41:649–655.
Williams MA, Haskell WL, Ades PA, et al. Resistance exercise in individuals with and without cardiovascular disease: 2007 update. Circulation. 2007;116:572–584.
Zoico E, Di Francesco V, Guralnik JM, et al. Physical disability and muscular strength in relation to obesity and different body composition indexes in a sample of healthy elderly women. Int J Obes Relat Metab Disord. 2004;28:234–241.
Chapter 55
Rehabilitation
Cynthia J. Brown
DEFINING REHABILITATION
The purpose of rehabilitation is to restore some or all of a person’s physical and mental capabilities that have been lost as a result of disease, injury, or illness and to help achieve the highest possible level of function, independence, and quality of life. The techniques and modalities used to achieve these goals are numerous and typically do not differ for younger versus older persons. However, rehabilitation outcomes and approaches are frequently different for the older adult. For example, most young adults experience a single acute event that results in disability, whereas older adults are more likely to have multiple comorbid conditions that, over time, result in disability. Even when older persons have an acute event, such as a hip fracture or a stroke, their underlying comorbid conditions may affect the outcomes of rehabilitation. Older patients may also have subclinical physical or cognitive comorbidities, which become evident when challenged by a new disability. For example, mild cognitive impairment may first be recognized during rehabilitation after a hip fracture, when the patient has difficulty learning how to use a new assistive device.
Goals of rehabilitation for older adults usually focus on recovery of self-care ability and mobility, while for younger persons reentering the workforce or returning to school may be the goal. In general, recovery for older adults requires a longer period of time, and functional outcomes are usually worse than those of younger adults. It is important to discuss rehabilitation goals with all patients and to focus therapy on achieving those goals. For example, an older person may have been an avid golfer or fisherman, and returning to that activity may be important for quality of life.
Rehabilitation efforts and goals of care may also be impacted by a person’s values and beliefs about exercise and social roles. For example, if a patient has never cooked and does not believe that this is an important task to learn, taking the patient to the kitchen to learn how to prepare a meal may be viewed as a useless task. Participation by the patient and family in the development of the goals of rehabilitation is critical to achieve a successful outcome.
Disability is common in older persons and can have a significant impact on function and quality of life. In order to better understand the process of disablement, a variety of theoretical models have been explored and are presented below.
Learning Objectives
Name and describe the conceptual models used as the framework for rehabilitation.
Explain the advantages and disadvantages of each site of care for rehabilitation.
List providers who are commonly members of the interprofessional rehabilitation team.
Identify common rehabilitation interventions, in addition to exercise.
Name at least two types of adaptive aids and how they are used.
Key Clinical Points
In addition to the history and physical examination, an evaluation should include assessments of cognition, motivation, depression, social support, and financial resources, as these factors can have a significant impact on rehabilitation outcomes.
It is important for the provider to understand the range of available rehabilitation settings, both inpatient and community based, and the advantages and disadvantages of each setting.
An interdisciplinary team is often required to meet the complex rehabilitation needs of older patients, and while team members have defined roles and functions, there is considerable overlap in the services provided.
Exercise is the cornerstone of physical rehabilitation, with each prescribed exercise being related to achievement of a goal and ultimately to an improvement in function.
Adaptive aids include devices that allow persons with physical limitations to participate in activities, such as basic and instrumental activities of daily living, with greater ease and/or less pain. Categories of adaptive aids include mobility aids to assist people to move around within their home and community, bathroom aids to assist with bathing and toileting, and self-care aids that assist with dressing, personal hygiene, cooking, and other activities.
History of the Disability Framework
In an attempt to provide a framework for the discussion of the consequences of disease and injury, Nagi developed the first disablement model in the 1960s (Figure 55-1). The model comprises four related yet distinct phenomena that Nagi considered the basis of rehabilitation: active pathology, impairment, functional limitation, and disability. Active pathology was described as a disruption in normal cellular function and the body’s efforts to regain a normal state. Impairment, which usually results from active pathology, referred to an abnormality or loss at the tissue or organ level.
Functional limitation described restrictions at the individual level, while disability described a physical or mental limitation in a social context.
In Nagi’s view, disability was a product of the interaction between individuals and their environment. Importantly, individuals could have similar functional limitations that result in different patterns of disability, depending on the environment in which they function.
FIGURE 55-1. Schematic of the Nagi disablement model with definitions. The first disablement model was described by Nagi in the early 1960s. The initial disablement model focused on a
linear progression to disability and has been replaced over time with new models such as the International Classification of Functioning, Disability, and Health (ICF). Importantly, the Nagi model was the first attempt to describe the process of disability.
In 1980, the World Health Organization’s (WHO) International Classification of Impairments, Disabilities, and Handicaps (ICIDH) was developed in Europe. Like Nagi’s disablement model, the ICIDH characterized three distinct concepts related to disease and health conditions: impairments, disabilities, and handicaps. While the ICIDH was developed to classify function and disability, it failed to receive endorsement by the World Health Assembly. A major criticism of these early disablement models was that these presented the response to disease or illness as a static process with a linear progression through the disablement process. It was recognized that the interaction between disease and disability is more complex, particularly for older persons. Recognition of this complexity led to significant dialogue within the rehabilitation community and to a major revision of the ICIDH.
In 2001, the WHO released the International Classification of Functioning, Disability, and Health (ICF) (Figure 55-2), which attempted to incorporate a biopsychosocial view of health from biological, personal, and social perspectives. The ICF characterizes decreases in function as the consequence of a dynamic interaction between various health conditions and contextual factors. Health conditions are described as diseases, disorders, injuries, or aging. Contextual factors are divided into two categories: environmental factors and personal factors. Environmental factors include the physical, social, and attitudinal environment in which people live. These might include the individual environment, such as furniture placement in the home, or the societal environment, such as policies regarding access to buildings. Personal factors are characteristics of the individual that are not part of the health condition or illness. These might include gender, fitness, or coping styles. Listed across the center of the model are the three domains of human function: body functions and structures, activities, and participation. Body functions and structures are the physiologic functions and the anatomic parts of the body. The execution of a task or action by a person is an activity, while participation is involvement in a real-life situation. For each of these three domains of human function, there are several levels on which function can be experienced: the level of the body or body parts, the level of the whole person, and the level of the whole person in their environment. Disability is defined as any decline at any of these levels.
FIGURE 55-2. International Classification of Functioning, Disability, and Health (ICF). The latest version of the ICF focuses on the interaction between various factors and the impact these factors have on health and functioning. Prior models focused on disability and portrayed the path to disability as a linear process. This model attempts to incorporate, from a biological, personal, and social perspective, a biopsychosocial view of health. (Reproduced with permission from World Health Organization. Towards a Common Language for Functioning, Disability and Health. Geneva, Switzerland: ICF; 2002.)
Using the ICF model, we could describe an older woman with a history of osteoarthritis of the knees and hypertension who presents to rehabilitation after a hip fracture. She lives alone in a second-floor apartment and has a daughter who lives 6 hours away. The patient has a large circle of friends and regularly attends social gatherings at the local senior center. Figure 55-3 demonstrates how this patient’s problems might be placed in the ICF model, with the goal of generating hypotheses about the best treatment options. Important issues include not only improving the patient’s strength and walking ability but also addressing where she will live after discharge and how to keep her active in her community. Understanding the relationships between the different components and addressing them is the key to successful rehabilitation.
FIGURE 55-3. Using the ICF model to describe patient function. This figure demonstrates how a patient’s problems might be placed in the ICF model with the goal of generating hypotheses about the best treatment options. In the case of this patient with osteoarthritis and new hip fracture, the ICF model illustrates how addressing where the patient will live after discharge and how to keep the patient active in the community are as important as improving the patient’s strength and walking ability. Understanding the relationships between the different components and addressing them is the key to successful rehabilitation. (Data from WHO ICF model.)
It is believed by many that the ICF framework has the potential to provide a standard disablement language, which could facilitate dialogue across disciplines. The ICF model attempts to reflect the interactions between different components of health and avoids the linear view of previous models. This framework also looks beyond disease and mortality to focus on how people live with their disabling conditions.
EVALUATION
Goals of Evaluation
An important goal of evaluation is to identify the cause of the disability for which rehabilitation is required. While there is frequently a final common pathway for many disabling conditions, the cause may impact on treatment and outcomes. For example, a person’s walking difficulty could be caused by osteoarthritis of the knee or a meniscal tear. For the patient with osteoarthritis, an exercise program focused on strengthening the musculature around the knee has been demonstrated to decrease pain and improve the ability to walk. For the patient with a meniscal tear, surgical intervention may be a better option. Evaluation prior to rehabilitation is also important to identify comorbidities that may directly or indirectly affect rehabilitation
outcomes. While the older person may have osteoarthritis causing limited walking ability, they may also have poor cardiac or pulmonary function that further limits walking. Another goal of evaluation is to determine the best site for rehabilitation. Several settings are available, including an inpatient rehabilitation facility, a nursing home providing subacute rehabilitation, or the patient’s home. The appropriate setting is usually determined through evaluation of the disability and of comorbid conditions that may affect rehabilitation. The next section outlines the evaluation process and focuses on creation of an individual treatment plan that addresses the patient’s unique disabling conditions.
Function-Oriented History and Physical Examination
During the initial evaluation, the history and physical examination can help characterize the disabling conditions and lead the clinician toward the most effective types of treatment. Determining if the functional decline occurred suddenly or has taken a more slowly progressive course may be very helpful in determining the cause of the disability. Symptoms associated with a given activity may also help narrow the cause to a specific organ system. For example, while the impairment may be difficulty in walking, the limitation could be caused by shortness of breath or pain with weight bearing.
Differentiating the causal pathway, in this case cardiopulmonary versus musculoskeletal, helps to refine the work-up required and assists the provider in targeting the appropriate therapy. Functional status and residence prior to the illness or injury may also help guide expectations of rehabilitation.
Table 55-1 lists several brief screening maneuvers, which can be done in the physician’s office when evaluating a patient for disability. These assessment tools can be used to quickly assess baseline functional status as well as to monitor progress during rehabilitation. If these screening tests are positive, additional testing should be performed, as the screening tests are often not as accurate as more detailed maneuvers. A variety of standardized measures are available to further test function during basic activities of daily living and instrumental activities of daily living (IADLs).
TABLE 55-1 ■ PHYSICAL PERFORMANCE TESTS USED TO ASSESS FUNCTION
SCREENING ACTIVITY | ATTRIBUTE ADDRESSED | FUNCTIONAL IMPLICATION
Put a heavy book on an overhead shelf | Upper extremity strength and range of motion | Ability to perform housework
Grasp a piece of paper and resist its removal | Pinch strength | Grooming and feeding self
Write a sentence | Fine motor coordination | Feeding self
Timed rise to standing five times | Lower extremity strength | Ambulation and stair climbing
Timed gait | Dynamic balance; predicts falls and morbidity and mortality | Ambulation and general function
Standing balance: feet side by side, semitandem, and tandem | Static balance | Balance with progressively smaller base of support
Life-space assessment | Mobility within the home and community in the weeks prior to assessment | IADLs; addresses factors in addition to physical function that might impair mobility
Romberg: standing with eyes closed, assess sway or loss of balance | Proprioception | Ability to balance without visual input
Data from Studenski S, Perera S, Wallace D, et al. Physical performance measures in the clinical setting. J Am Geriatr Soc. 2003;51(3):314–322; and Peel C, Sawyer Baker P, Roth DL, et al. Assessing mobility in older adults: the UAB Study of Aging Life-Space Assessment. Phys Ther. 2005;85:1008–1019.
For example, lifting a heavy book overhead tests shoulder range of motion and strength. If a person is unable to achieve this task, additional range of motion and muscle testing should be done to isolate the cause of the difficulty.
Many of the screening tests listed have normative values and have been well validated in the geriatric population. The short physical performance battery includes three of the screening tests (gait speed, timed chair stands, and static balance), with worse scores being associated with falls, nursing home placement, and mortality. The University of Alabama at Birmingham (UAB) Study of Aging Life-Space Assessment is a validated instrument that measures a person’s mobility in the home and community during the month preceding the assessment. Importantly, the Life-Space Assessment goes beyond measuring the individual’s ability to perform specific tasks by assessing the person’s actual pattern of mobility, which may help identify factors other than physical impairment, such as emotional or socioeconomic factors, that might be limiting mobility.
In addition to the history and physical examination, an evaluation should include assessments of cognition, motivation, depression, social support, and financial resources, as these factors can have a significant impact on rehabilitation outcomes. A variety of validated assessment tools can be used to screen for cognition and depression, such as the Montreal Cognitive Assessment (MoCA) and the Geriatric Depression Scale, respectively.
Assessment of current methods utilized by the patient for coping with disability, including use of ambulatory or assistive devices, level of assistance needed, and any limitation of activities, should also be explored.
Determining Rehabilitation Potential
Many factors influence the choice of who would benefit from rehabilitation and the success of those rehabilitation efforts. Assessment for rehabilitation potential needs to be done when the acute medical illness has resolved. A patient with a hip fracture and concurrent delirium may do poorly on initial assessment but, once the delirium clears, may progress nicely with rehabilitation. At times, the medical condition will need to be treated concurrently with rehabilitation efforts. After a prolonged intensive care unit (ICU) stay, a patient may have significant orthostatic hypotension, which will resolve as they regain the upright position during rehabilitation. Table 55-2
lists a variety of acute medical illnesses that might delay referral to rehabilitation until these are resolved.
TABLE 55-2 ■ FACTORS FOR WHICH REHABILITATION MAY NEED TO BE DELAYED UNTIL RESOLVED
Other determinants of rehabilitation benefit include motivation, cognition, and prior functional status. Comorbid illness may have a significant effect on rehabilitation efforts and may even cause a change in rehabilitation approaches. For example, a patient with chronic obstructive pulmonary disease (COPD) on home oxygen who falls and fractures a hip may not be able to tolerate more than 5 minutes of therapy at one time. The rehabilitation
approach might be changed to include frequent walks of short duration by therapy and nursing, as opposed to an hour-long therapy session twice a day. Table 55-3 lists a variety of factors that might influence either the use of rehabilitation or the goals of the rehabilitation. In some cases, like terminal illness with a short life expectancy, the goals of care may need to be addressed with the patient and family, and palliative care may be a more appropriate option. For other factors, like lack of motivation, well-defined patient-centered goals that are easily measured may help overcome this potential obstacle.
TABLE 55-3 ■ FACTORS THAT MAY INFLUENCE THE SUCCESS OF REHABILITATION INTERVENTIONS
After careful evaluation, including a function-oriented history and physical examination, assessment of factors that may affect rehabilitation outcomes, and development of goals with the team, including the patient and the family, the final step before initiating rehabilitation is choosing the site of care. Determining the optimal
setting in which rehabilitation should occur is based on many of the factors previously evaluated, as well as patient preference.
COMPONENTS OF REHABILITATION
The Organization of Rehabilitation
Settings for Care A variety of settings are available, both inpatient and community based, in which to receive rehabilitation services. It is important for the provider to understand the range of available settings and the advantages and disadvantages of each setting. While the provider may be responsible for helping match the patient to the optimal setting, insurance and cost also play a role. Rehabilitation services are available through Medicare Part A on a time-limited basis. Patients must demonstrate that they are making progress with rehabilitation goals in order to qualify for services.
Inpatient rehabilitation is offered in rehabilitation centers and Medicare- skilled nursing facilities. In order to qualify as a Medicare-certified inpatient rehabilitation hospital, a certain percentage of all admitted patients must have at least one of 13 conditions, which include diagnoses like stroke, burns, and neurologic disorders. Patients must be managed by an interdisciplinary team of skilled nurses and therapists, be seen daily by a physician, and require 24-hour rehabilitation nursing care. Rehabilitation is intensive with patients receiving a minimum of 3 hours of therapy daily. As the rehabilitation center offers 24-hour-a-day medical care, patients in need of close medical supervision during therapy can receive it. However, the patient must be able to tolerate the intensity of therapy provided, which may be difficult for the older patient.
Like the rehabilitation center, Medicare-approved skilled nursing facilities must provide 24-hour nursing care. While physicians must be available 24 hours a day, they supervise care and can visit the patients less frequently. Interdisciplinary care may not occur, although therapy services, dietary, pharmacy, and social services are available. There are no requirements for intensity or duration of therapy sessions, or any required case mix. This setting allows for a slower rehabilitation pace, which may be necessary for some older patients with multiple comorbid diseases. The availability of 24-hour nursing care is also a benefit for persons who are unable to care for themselves or who do not have caregivers at home.
Home health benefits for rehabilitation, including part-time nursing and therapy services, are also available through Medicare to patients who are defined as “homebound.” This includes patients for whom leaving the home is difficult or who require help of another person to get out of the home. A physician or allowed practitioner (advanced practice providers or clinical nurse specialists) must order Medicare home health services, certify a patient’s eligibility for the benefit, and a face-to-face encounter must occur within the 90 days prior to the start of home health care, or within the 30 days after the start of care. These services must be recertified every 60 days. While the intensity of the rehabilitation is less and the nursing services are part time, many patients prefer rehabilitating in their own home. If the patient has the necessary support system, this can be an excellent option.
While a number of studies have examined the effect of the rehabilitation setting on outcomes, the results remain unclear. For patients with hip fracture, the setting of care does not appear to have an impact on outcomes. After a stroke, patients who are treated in inpatient rehabilitation hospitals or special stroke units are more likely to be discharged to home and with improved function. Ultimately, factors such as patient prognosis, level of medical and nursing care needed, and intensity of therapy the patient can tolerate will help determine the optimal setting for rehabilitation.
Rehabilitation Team Members
An interdisciplinary team is often required to meet the complex rehabilitation needs of older patients. While team members have defined roles and functions, there is considerable overlap in the services provided. For example, while the physical therapist may focus on transfer and gait training, the occupational therapist may also encourage practice of transfers while performing self-care skills. In addition, there are different levels of education and licensure required for different providers. Table 55-4 outlines the types of rehabilitation team members and their usual roles, methods of evaluation, and treatment.
TABLE 55-4 ■ REHABILITATION TEAM MEMBERS, TYPICAL ROLES, AND METHODS USED FOR EVALUATION AND TREATMENT
Key members of this interdisciplinary team are the patient and the family. Indeed, the patient and family are at the center of the interprofessional team. An important component of chronic disease management is patient self-management, and patients must be active participants in the decision-making process. The team is responsible for establishing goals, in collaboration with the patient and the family, and developing a treatment plan to achieve those goals. In addition to teaching the patient, a key component of rehabilitation is
training the caregiver or the family. Specifically, caregivers must be taught how to assist with exercise programs, ambulation, and activities of daily living (ADLs). The caregiver may need to know how to use adaptive equipment or even how to transfer the patient safely, if the patient is not independent with this task. Communication among all team members, including the patient and their family, is critical for success.
Process of Care: Rehabilitation Interventions
A variety of interventions are available to treat physical impairments and disability. The selection of intervention strategy is determined by the results of the assessment. All interventions should either directly or indirectly lead to an improvement in activity and/or participation. Major categories of interventions include (1) exercise/physical activity; (2) modalities including thermal agents and electrotherapy; (3) adaptive aids such as walkers, canes, and devices to improve ADLs; and (4) orthotics (splints and braces) and prosthetics (artificial limbs).
Exercise/physical activity
General Principles of Exercise Exercise is the cornerstone of physical rehabilitation. Each exercise prescribed for a patient should be related to achievement of a goal, and all goals should lead to improvement in function. For example, a common exercise for patients is elbow and shoulder flexion. Typical goals related to these exercises are feeding oneself, dressing oneself, or reaching overhead to retrieve an item from a shelf. The expected functional outcome for each exercise should be shared with the patient and his/her family. The involvement of the patient and the family is important to maximize adherence.
In setting goals for exercise programs, it is important to consider any pathology that is present. If there is irreversible damage to the neuromuscular system, then the potential for improvement in muscle function is limited.
Amyotrophic lateral sclerosis is an example of a pathology in which improvement in muscle function is limited. However, in most cases, decline in muscle strength results from a combination of pathology and deconditioning occurring secondary to inactivity. The deconditioning component is reversible, and therefore some improvement in muscle strength is possible.
A common question often asked is whether exercise is safe for older adults. The number of adverse events reported as a result of exercise in
adults is relatively low, with adverse events being more common with vigorous exertion and in persons with atherosclerotic heart disease. To ensure safety, patients should be monitored closely before exercise, during exercise, immediately after exercise, and 24 hours after exercise. In addition, prior to the initiation of an exercise program, medical conditions such as congestive heart failure and diabetes should be stable and under optimal medical management. Exercise should be supervised initially to assure appropriate changes in heart rate and blood pressure.
Minor muscle soreness is expected with most types of exercise, and patients should be counseled to anticipate some discomfort. Some patients with osteoarthritis may actually experience a decrease in joint discomfort with exercise. Because increased activity typically has a positive effect on insulin resistance, patients with diabetes may require less medication to control blood glucose levels.
For exercise to cause a change in physical function, the body must be stressed greater than the usual stress of everyday life. Therefore, it is important that patients are challenged in their exercise programs. For example, when a patient achieves a specific goal, such as walking 30 m at
0.5 m/s, the goal needs to be increased to a greater distance and/or speed. In comparison to younger adults, older adults need a longer recovery time both during an exercise session (between bouts) and between sessions. Improvement may also take longer for older versus younger adults, which should be considered when setting time frames for the achievement of goals.
There are many classification systems for types of exercise. In the following discussion, we will describe exercise types based on the anticipated outcome, including exercise to increase muscle strength, aerobic capacity, balance, flexibility, and motor control.
Exercise to Increase Muscle Strength A typical exercise program to increase muscle strength involves movements performed against resistance. The resistance can be weights, rubber tubing or bands, or the person’s own body weight. To achieve optimal results, the resistance should be 60% to 80% of the person’s maximal lifting ability. For the majority of older adults who are starting an exercise program, it is wise to begin at a lower level (~40%–50% of maximal capacity) until the person has mastered the movement patterns.
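As an arithmetic illustration of these intensity targets, the sketch below computes training loads from a hypothetical one-repetition maximum (the 20-kg value is illustrative only, not from the text):

```python
def strength_training_load(one_rep_max_kg: float, fraction: float) -> float:
    """Training resistance as a fraction of the one-repetition maximum (1RM)."""
    return one_rep_max_kg * fraction

# Hypothetical example: a person whose maximal lift (1RM) is 20 kg.
one_rm = 20.0

# Optimal range for strength gains: 60% to 80% of maximal lifting ability.
optimal_low = strength_training_load(one_rm, 0.60)   # 12.0 kg
optimal_high = strength_training_load(one_rm, 0.80)  # 16.0 kg

# Conservative starting range while mastering movement patterns: ~40%-50%.
start_low = strength_training_load(one_rm, 0.40)     # 8.0 kg
start_high = strength_training_load(one_rm, 0.50)    # 10.0 kg
```

The same fractions apply whatever the measured 1RM; only the absolute loads change with the individual's capacity.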
Bone density can be positively affected by exercises designed to increase muscle strength. Prior to initiating a program to increase muscle strength, it is important to know the individual’s bone status, or the extent and location of
osteoporosis and/or osteopenia. In persons with osteoporosis, the amount of resistance may need to be modified to avoid overstressing bones and causing a fracture.
Exercise to Increase Aerobic Capacity Exercise programs designed to increase aerobic capacity involve continuous activity (such as walking, cycling, and stair stepping), usually performed for at least 20 minutes at an intensity that is 50% to 80% of one’s maximal oxygen consumption. Older adults who are beginning a program may need to start with 5 to 10 minutes of continuous activity at a lower intensity. In fact, several shorter sessions (eg, 10 minutes) per day have shown benefits similar to those of a single 30-minute session. To achieve benefits, endurance training needs to be performed three to five times per week. For older persons in rehabilitation programs who have physical limitations, the specific activities may need to be modified.
For example, cycling may not be possible for a person who has significant hemiparesis after a stroke. Adaptations, such as securing the person’s foot to the pedal, may allow the individual to exercise. Moving to another venue, such as a therapeutic pool, may be another option.
A benefit of aerobic capacity training is an increase in maximal oxygen consumption, which is defined as the amount of oxygen consumed while performing the maximal workload that one can perform for 2 to 3 minutes. Improving the maximal amount of work that a person can perform is not necessarily a direct benefit because daily activities are performed at submaximal, rather than maximal, levels of exertion. However, with an increase in maximal oxygen consumption, a given amount of submaximal work is performed at a lower percent of maximal oxygen consumption.
From a functional perspective, this means that activities such as dressing, bathing, and performing housework can be performed with less fatigue and for longer time periods.
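This relationship can be made concrete with a worked calculation (all oxygen-consumption values below are hypothetical):

```python
def percent_of_max(task_vo2: float, max_vo2: float) -> float:
    """Oxygen cost of a task as a percentage of maximal oxygen consumption."""
    return 100.0 * task_vo2 / max_vo2

# Hypothetical values, in mL O2/kg/min.
task_cost = 12.0      # oxygen cost of a daily task (eg, housework); unchanged by training
vo2max_before = 20.0  # maximal oxygen consumption before training
vo2max_after = 25.0   # maximal oxygen consumption after aerobic capacity training

relative_before = percent_of_max(task_cost, vo2max_before)  # 60.0% of maximum
relative_after = percent_of_max(task_cost, vo2max_after)    # 48.0% of maximum
```

The absolute oxygen cost of the task is unchanged; only its relative demand falls, which is why the same housework feels less fatiguing after training.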
Aerobic exercise has positive benefits for persons with hypertension, hypercholesterolemia, and obesity. Regular aerobic exercise has been shown to decrease resting blood pressure, resulting in a decrease in the dose of, or the need for, antihypertensive medications. Although aerobic exercise often does not decrease total cholesterol, many studies have shown positive benefits for high-density lipoprotein cholesterol and triglyceride levels. For both treatment and prevention of obesity, aerobic exercise is a key component of a successful program.
Aerobic exercise also has benefits for cardiovascular and pulmonary diseases. For persons with angina, participation in a regular endurance exercise program produces an increase in the “anginal threshold,” allowing persons to exercise at higher levels prior to the onset of angina. Persons with chronic pulmonary disease typically experience less breathlessness with activities, as a result of a regular exercise program. One of the most effective treatments for claudication is walking to the onset of pain, which ultimately produces an increase in the distance walked prior to the onset of claudication.
Exercise to Improve Balance Balance, defined as the ability to remain upright as the body’s center of gravity shifts relative to the base of support, declines as people age. The somatosensory, visual, and vestibular systems contribute to our ability to maintain balance. Balance is an essential component to walking because walking involves a continuous shift of the body’s center of gravity relative to the base of support.
Balance exercises are important for persons who have fallen or who are at high risk of falls. Exercises are prescribed that are appropriate for the individual’s current level of function. For example, low-level exercises may involve standing and shifting one’s center of gravity. As the person progresses, he/she may practice standing on one extremity or walking on a balance beam. Exercises performed with eyes closed force the use of the vestibular and somatosensory systems. Exercises performed on altered surfaces, such as foam or carpet, force the use of the visual and vestibular systems. For persons with a specific abnormality of the vestibular system, such as benign positional vertigo, there are specific exercises that often are helpful.
Flexibility Exercises The goal of flexibility exercises is to increase range of motion of a body part by increasing muscle length and/or joint motion. The most effective and safe approach is a prolonged, low-intensity stretch of the muscle/joint with limited motion. Having a sufficient amount of motion is important for many functional activities. For example, going up and down stairs using a reciprocal gait pattern requires at least 90 degrees of knee flexion. Reaching overhead requires 120 to 150 degrees of shoulder flexion and abduction. In addition, sufficient range of motion at the hip and ankle is important for maintaining standing balance. Excessive range of motion, however, decreases stability and can lead to pain, decreased mobility, and falls.
Parkinson disease is an example of a common condition in older adults in which flexibility exercises are important. Persons with Parkinson disease assume a flexed posture, resulting in loss of cervical and thoracic extension. Prescribing exercises early in the course of the disease may lessen the severity of the postural abnormalities. Maintaining spinal extension is important to prevent compromise of respiratory capacity and to minimize gait and balance disorders.
Task-Oriented Exercises to Improve Motor Control Older persons may lose their ability to perform tasks because of abnormal tone (spasticity) and/or muscle weakness. Task-oriented exercises are used to improve coordination and motor performance. Constraint-induced movement therapy (CIMT) is one method used to improve motor function. With CIMT, the stronger, less affected extremity is “constrained” using a cast or mitt. The person is forced to use the affected extremity and practice tasks needed for daily living. This method typically involves participation in therapy for 4 to 6 hours per day.
The approach is based on learning theory and assumes plasticity of the central nervous system with reforming of neural connections after injury. A Cochrane review reported greater improvements in upper extremity motor function in persons poststroke with CIMT compared to conventional treatment; however, these differences were no longer significant at 6 months.
A second method used to improve walking performance and balance is treadmill training with partial body support. A harness is connected to an overhead system, which is mounted on a treadmill. By providing partial weight support in combination with a speed-controlled treadmill, patients can perform a greater amount of task-specific practice compared to traditional methods of gait training. A Cochrane review of treadmill training with and without body weight support after stroke concluded that persons who are able to walk appear to benefit the most from this intervention.
Potential benefits include an increase in walking speed and walking endurance.
Designing Exercise Prescriptions Exercise should be prescribed in an individualized manner similar to prescriptions for medications. The components of an exercise prescription include mode (type of activity), frequency (number of times per day or week), intensity (level of exertion), and duration (length of time of each individual session). Sufficient research is not available to determine the ideal exercise prescription for older adults with the variety of comorbidities that are typically present. However, for
some types of exercise, such as moderate- to high-intensity strength training, there is evidence that exercising every other day or every third day is optimal because of the need for recovery time. Because specific guidelines for persons with pathology are not available, pre- and postexercise monitoring is essential. Vital signs and blood glucose levels (if diabetes is present) need to be checked prior to and after exercise. Persons who are beginning an exercise program or increasing their current exercise level need to be asked how they are feeling 24 hours after exercise. Severe muscle soreness and general body fatigue are signs that the exercise stress was excessive.
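The four components of an exercise prescription described above can be summarized as a simple structured record (a sketch; the field names and example values are illustrative, not a clinical standard):

```python
from dataclasses import dataclass

@dataclass
class ExercisePrescription:
    """The four components of an individualized exercise prescription."""
    mode: str       # type of activity
    frequency: str  # number of times per day or week
    intensity: str  # level of exertion
    duration: str   # length of each individual session

# Hypothetical prescription for moderate-intensity strength training,
# scheduled every other day to allow recovery time.
rx = ExercisePrescription(
    mode="resistance training with elastic bands",
    frequency="every other day",
    intensity="~60% of maximal lifting ability",
    duration="30 minutes per session",
)
```

Writing the prescription out in this form makes it easy to check that none of the four components has been left unspecified before the program begins.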
Physical modalities Physical modalities that are often used with older adults include heat and cold agents, aquatic or pool therapy, electrotherapy, and phototherapy (monochromatic infrared energy [MIRE]). For most of these agents, there is limited evidence of effectiveness as a sole intervention.
However, when used in combination with other therapies, especially exercise therapy, these modalities can enhance the effectiveness of the overall intervention. The most common indication for use of physical modalities is pain management. Because older adults may have reduced mental status or impaired circulation and sensation, modalities must be used with caution in this population. Educating patients and their families on the rationale for use of modalities is essential, especially if these are to be used in the home environment.
Thermal Agents (Including Aquatic Therapy) Thermal agents include superficial and deep heating modalities and cryotherapy. Physiologic effects of heat include increased blood flow and edema, increased extensibility of connective tissue structures, and decreased pain. Superficial modalities primarily increase the temperature of the skin and underlying subcutaneous tissue and include hot packs, heating pads, and paraffin. Superficial heat is beneficial for persons with osteoarthritis, rheumatoid arthritis, and conditions resulting in cervical and low-back pain. The most popular deep heating modality is ultrasound, a form of acoustic energy, which when absorbed by tissues is converted into heat. Ultrasound is often used for tissue contractures, tendonitis, and pain resulting from musculoskeletal disorders. Ultrasound should not be administered close to the brain, eyes, reproductive organs, pacemakers, or arthroplasties.
Cryotherapy includes cold packs, ice massage, cold water immersion, and vapocoolant sprays. Physiologic effects of cold include cutaneous vasoconstriction, decreased nerve conduction velocity, decreased spasticity,
and increased joint stiffness. Cold therapy provides short-term analgesia and often allows patients to move when movement otherwise would be too painful. Cold therapy should be avoided in persons with arterial insufficiency, impaired sensation, cold hypersensitivity, and Raynaud disease.
Aquatic or pool therapy is an alternative to physical modalities and/or exercise on land. A therapeutic pool provides a combination of heat and buoyancy for support of upright activities. Patients with muscle weakness and pain often can walk and exercise in a therapeutic pool when movement on land is limited. Because total body heating produces significant vasodilation, patients with cardiac insufficiency may experience chest pain. All patients need to be careful exiting the pool because of the possibility of postural hypotension. Persons with heat intolerance, such as those with multiple sclerosis, may require a “cool” pool with temperatures below 90°F.
Electrotherapy Electrotherapy can be used as an intervention for pain management, muscle activation, wound healing, and urinary incontinence. Transcutaneous electrical nerve stimulation (TENS), a popular treatment for pain, involves placing electrodes over peripheral nerves, nerve roots, or painful areas. The mechanism of action is unclear but probably involves the release of endogenous endorphins in the cerebrospinal fluid, which block pain by binding to opiate receptors. Adverse effects of TENS are minor and typically involve skin irritation secondary to sensitivity to the electrodes or gel. Contraindications to the use of TENS include persons with impaired sensation and/or cognition and persons with either a pacemaker or an implanted cardiac defibrillator. A recent Cochrane review was unable to conclude that, in people with chronic pain, TENS is beneficial for pain control, disability, health-related quality of life, or use of pain relieving medicines.
Neuromuscular stimulation is used to activate muscles to address treatment goals in persons after stroke, spinal cord injury, or knee surgery. For persons poststroke, electrical stimulation can be used to enhance functional movement patterns, such as contraction of the anterior tibialis muscle to prevent foot drop during the swing phase of gait or contraction of the quadriceps at heel strike during gait. After knee surgery or injury, persons often “forget” how to contract the quadriceps muscle as a result of pain and/or joint effusion. Neuromuscular stimulation can be used to “retrain” the quadriceps muscle, counteracting atrophy that occurs with immobilization.
An important difference between normal muscle contraction and electrically induced muscle contraction is the order of recruitment of motor units. With normal muscle contraction, the smaller fatigue-resistant motor units are recruited first, followed by larger, fatigable motor units. With electrical stimulation, the larger-diameter fatigable fibers are recruited first. The consequence is that fatigue occurs fairly quickly with neuromuscular stimulation. Providing adequate rest periods and limiting the duration and frequency of contractions are strategies to lessen fatigue.
Electrotherapy used for wound healing involves application of electric current either directly to a wound or to the skin surrounding the wound.
Electrical stimulation is approved for Medicare coverage for the treatment of stasis, arterial, pressure, and diabetic ulcers that have not responded to conventional therapy. Animal and human studies have shown that electrical stimulation increases both DNA and collagen synthesis; directs epithelial, fibroblast, and endothelial cell migration into wound sites; inhibits the growth of some wound pathogens; and increases the strength of scar tissue.
The effectiveness of electrical stimulation has been demonstrated in randomized controlled trials, with the strongest evidence in the treatment of pressure ulcers.
Phototherapy (MIRE) Devices that deliver MIRE were approved by the Food and Drug Administration (FDA) in 1994 to increase circulation and decrease pain. This treatment involves placing pads over the lower leg and foot. The pads contain diodes that emit light energy in the near-infrared spectrum (890-nm wavelength). The typical treatment protocol involves 30-minute treatments, three times a week, for a total of 12 treatments. The infrared photo energy is thought to release nitric oxide from hemoglobin. Nitric oxide relaxes smooth muscle cells, dilating blood vessels and improving circulation. MIRE has been used for the treatment of peripheral neuropathy, with the goal of improving sensation and decreasing neuropathic pain. Research studies are conflicting; however, most have shown that MIRE was no more effective than placebo in improving sensation.
Adaptive aids Adaptive aids include devices that allow persons with physical limitations to participate in activities, such as basic and IADLs, with greater ease and/or less pain. Categories of adaptive aids include mobility aids to assist people to move around within their home and community, bathroom aids to assist with bathing and toileting, and self-care aids that assist with dressing, personal hygiene, cooking, and other activities. It is not unusual for
older persons to have devices that they do not need, often received as a gift from a friend or relative. The opposite also occurs: older persons may need devices that have not been prescribed. The devices described below need to be prescribed by a professional, and the patient, family, or caregiver needs to be instructed in their use. Devices that are used incorrectly can lead to falls and other adverse events. It can be helpful for the professional (typically a physical or occupational therapist) to make a home visit to assess whether the prescribed adaptive devices can be used safely in the patient’s home.
Mobility Aids Canes are the most popular mobility aid for older adults because they are lightweight and easy to use when space is limited. Canes are used to decrease weight bearing (and pain) in an extremity with an arthritic joint and to improve balance by increasing the base of support. When adjusted to the proper height, the handle of the cane is at the level of the wrist when the arm is fully extended. Canes should be used in the hand on the side opposite the involved extremity. Many people will incorrectly use the cane in the hand on the side of the involved extremity. The cane then acts as a brace for the involved extremity, producing an abnormal gait pattern and limiting range of motion of both the hip and the knee of the involved side. To achieve a normal gait pattern, patients are instructed to hold the cane in the hand opposite the involved extremity and advance the cane and the involved extremity simultaneously. Patients then swing through with the uninvolved extremity while bearing weight on the cane and, to a lesser degree, the involved extremity. For stairs, patients are taught to go “up with the good and down with the bad.” To ascend the stairs, the uninvolved extremity is advanced up the stairs first, while the involved extremity and the cane remain on the lower step. To descend the stairs, the involved extremity and the cane are lowered first and then the uninvolved extremity descends to the same step. For persons with decreased sensation in the lower extremities, a cane can also provide proprioceptive input to the brain by transmitting information from intact proprioceptors in the hand.
Two major types of canes are straight and quad canes. Straight canes are usually made of aluminum or wood, with a variety of handles available.
Quad canes are aluminum canes with a four-legged base. One advantage of a quad cane is that the cane does not fall if the person releases the handle. A disadvantage of some quad canes is that their base is too large to place on stairs, making stair climbing difficult.
Crutches are usually not used with older adults because a higher level of coordination and skill is required. The two major types of crutches are axillary and forearm crutches. If axillary crutches are used incorrectly, shoulder injury and/or compression damage to the nerves in the axilla can occur. Forearm crutches are more functional because a cuff secures the crutch on the patient’s arm, allowing use of the hand to manipulate objects. Crutches are usually used to provide bilateral support. However, a single crutch can be used instead of a cane if additional unilateral support is needed.
A walker usually is prescribed when a cane does not provide sufficient support. Walkers provide bilateral support and are easier to use than crutches. Walkers should be adjusted so that the user maintains an erect posture and is not required to lean forward to reach the walker. There are several types of walkers that vary in stability and function. The standard four-point or “pick-up” walker requires that the person pick it up with each step, which demands arm strength and endurance and produces a slow walking
speed. With a two-wheeled rolling walker, the person can use a more normal gait pattern and speed. Having two rather than three or four wheels provides more stability. A four-wheeled walker, called a “rollator,” has hand brakes so that it can be locked when the user is standing up and sitting down. This type also has a platform seat for resting and a basket for carrying objects.
The rollator requires greater skill because of the use of the hand brakes. It is preferred for outdoor use because the wheels are larger and move easier over sidewalks and slightly rough terrain. A final option is the Merry Walker, which provides the maximal amount of support. This type of walker includes front, side, and back bars, and a seat for resting. Merry Walkers are larger and more difficult to manipulate in homes. They are often used in institutional settings for persons with severe balance and coordination deficits.
A wheelchair should be prescribed for a person who can no longer walk safely or when walking endurance is low. The wheelchair allows the person to continue to do activities, such as shopping, that require extended periods of standing and walking. Quality of life is maintained and social isolation is avoided. Two main types of wheelchairs are manual and power chairs. There are many options available for customizing a chair for individual needs. In considering the optimal chair, both stability and mobility need to be considered. For example, a back height that is too high makes propelling the chair independently difficult, thus impairing mobility, whereas a back height that is too low may not provide adequate trunk support. There are a variety of
manual wheelchairs available with many different features. The width of the seat can range from narrow to wide to accommodate larger persons.
Removable arm rests and foot rests are available and make transfers easier and safer. Fixed foot rests are not recommended because they can contribute to falls. Consultation with the occupational therapist or physical therapist is recommended, as the therapist would know best how to order appropriate parts to maximize function.
Manual wheelchairs are lighter in weight than power chairs and are fairly easy to fold and load into a car for travel. Power chairs and scooters provide enhanced mobility outdoors and in the community. Most power chairs are difficult to maneuver in homes. In addition, a car carrier is needed for travel.
Prescribing a wheelchair is an important decision, and the advantages and disadvantages for each patient need to be considered. For patients who are able to walk, having a wheelchair may discourage walking, leading to decreased muscle strength and endurance and an increased probability of falling. However, not prescribing a wheelchair as walking ability declines can negatively affect quality of life. Patients and families need to be counseled on appropriate use of a wheelchair based on their unique needs.
Table 55-5 illustrates some of the commonly prescribed types of canes, walkers, and power chairs and describes some of the benefits and drawbacks of using the different mobility aids as well as clinical situations in which the device might be useful.
TABLE 55-5 ■ KEY FEATURES AND CLINICAL SITUATIONS WHERE AMBULATORY DEVICES MIGHT BE BENEFICIAL
Bathroom and Self-Care Aids For many older adults with physical disabilities, the bathroom is a challenging and unsafe place. Devices are available for use in a typical home bathroom to make activities easier and safer. Grab bars located close to the toilet and shower or tub should be considered for all older adults. Many older adults have difficulty rising from a regular toilet seat because of the low height. A raised toilet seat can be secured to a regular toilet. For persons who need more assistance, bedside commode chairs are available. Some bedside commode chairs have wheels and can be
rolled over a regular toilet. Tub benches are available for individuals who have difficulty getting out of a regular tub, and shower chairs are available for persons who cannot stand independently to take a shower.
Devices are also available to assist patients with ADLs. Occupational therapists can assist with identifying the most appropriate adaptive aids.
Some examples for dressing include aids to assist with manipulating buttons, securing pants, and putting on/off shoes and socks. For eating, enlarged handles on utensils and modified plates are available. Having an appropriate aid often results in independence in performing a task versus needing to ask for assistance.
Electronic Devices (Environmental Control Units/Augmentative Communication Aids) Devices are available that use more sophisticated technologies to allow patients a greater degree of independence and enhanced communication. Environmental control units typically are used for patients with severe disabilities to allow turning on/off lights and controlling other electronic devices. Whatever voluntary motion is available is used to control the unit, usually through a joystick, mouth stick, or eye motion. Devices used to enhance communication include communication boards, voice amplifiers, and telephone adaptations. For strategies to enhance communication, consultation with a speech and language pathologist is recommended.
Orthotics and prosthetics Orthotics and prosthetics are external devices used to enhance function. Orthoses typically are used to either restrict or assist motion and are named according to the joints or body parts that are affected. For example, ankle-foot orthoses (AFOs) include the foot and ankle, and knee-ankle-foot orthoses extend from the thigh to the foot.
The most commonly used orthoses for older adults are foot orthoses and AFOs. Foot orthoses include shoe inserts and other devices placed inside the shoe. Inserts can be used to relieve pain or to protect insensitive feet.
Examples of commonly used foot orthoses include heel spur cushions, scaphoid pads to correct flattening of the arches, and metatarsal pads to transfer weight from the metatarsal heads to the metatarsal shafts. AFOs are used for patients with either weakness or paresis of dorsiflexors to prevent foot drop during gait. These orthoses can be made of plastic or metal. Plastic orthoses can be interchanged between shoes but do not provide as much support as metal orthoses. Appropriate use of foot and ankle orthoses can improve function by providing a safe, more comfortable gait.
For individuals with diabetes and severe diabetic foot disease, Medicare will cover one pair of therapeutic shoes and inserts or shoe modifications each year. Physicians must provide a pedorthist or podiatrist with certification that the person has diabetes.
Many older adults require prostheses because of amputation resulting from vascular disease. In prescribing prostheses for older adults, important considerations include ease of donning and doffing, stability during activities, and overall function. In persons with severe dementia or advanced cardiopulmonary disease, wearing a prosthesis may not be practical. Persons who are not independent in basic ADL skills, such as transferring and dressing, typically are not good candidates for prosthetic use. To achieve the optimal outcome, the patient should be involved in a preprosthetic training program to improve strength and endurance. The physician should work closely with the physical therapist and prosthetist to assure that the prosthesis is evaluated for correct fit and that the patient receives training on use of the prosthesis.
SPECIFIC CONDITIONS TREATED WITH REHABILITATION
Pulmonary Rehabilitation
Patients with chronic respiratory diseases frequently experience disability owing to decreased exercise tolerance and symptoms like dyspnea and anxiety. The established benefits of pulmonary rehabilitation (PR) include improved exercise capacity, a decreased sensation of dyspnea, and overall improvement in quality of life. The cornerstone of most PR programs is exercise, although other components may include education and behavioral modifications like energy-conservation techniques. As with other rehabilitation programs, PR can occur in inpatient, outpatient, and home settings with equal success. Exercise programs are individually tailored to meet the patients’ needs, and patients are usually encouraged to exercise three times a week or more to a level where they experience moderate dyspnea. Training regimens vary, depending on the goal of the rehabilitation. For example, many patients with COPD experience dyspnea with upper body activity, like bathing or grooming. An exercise program that targets strengthening of the arms with elastic bands or light weights helps decrease the dyspnea and work effort required for these tasks. Treadmill or track
training will improve walking endurance but not strength, so a variety of training exercises are usually utilized.
A Cochrane review concluded that PR produced both statistically significant and clinically important improvements in dyspnea and disease-specific quality-of-life measures, and current guidelines recommend PR for all patients with COPD who experience ongoing symptoms despite optimal pharmacologic therapy. Supervised programs offered greater benefit than unsupervised ones, and patients with severe disease appeared to benefit the most when compared to those with mild-to-moderate COPD. However, many questions remain about which components are essential and how best to assess outcomes after rehabilitation.
Cardiac Rehabilitation
Cardiac rehabilitation (CR) is increasingly recognized as an important component of an interdisciplinary treatment strategy for patients with a history of myocardial infarction (MI) and stable angina and patients after coronary artery bypass graft (CABG) surgery. Many of the benefits of CR occur through the exercise component of these programs. Exercise training has been found to decrease coagulability, increase fibrinolysis, improve endothelial function by moderating inflammation, and improve endothelium-dependent vasodilation. These beneficial effects can be demonstrated by the reduction in C-reactive protein seen with exercise and the improvement in hyperemic myocardial flow after CR. Disability or functional decline is often associated with a variety of cardiac conditions, and participation in CR can improve fitness and reduce the signs and symptoms of exercise intolerance.
This can lead to improved functional independence. In addition to exercise training, CR provides a structured environment for risk factor management through patient monitoring plus support of compliance and adherence.
However, despite evidence of the beneficial effects, CR remains underutilized, with approximately one-third of eligible patients receiving CR.
In low-risk individuals with heart failure and after MI or stenting, exercise-based CR has been found to be safe, with no increase in short-term mortality, and effective, with demonstrated reductions in risk of hospital admission and improvements in patients' health-related quality of life compared with controls. Studies have also demonstrated equal effectiveness of home-based and center-based programs in improving outcomes of
exercise-based CR. In a study of more than 600,000 Medicare patients hospitalized for acute coronary syndrome, percutaneous coronary intervention, or CABG, the 12% who participated in CR had a lower rate of mortality compared to nonparticipants (2.3% vs 5.3%). This benefit was sustained at 5 years with a mortality rate of 16% for participants versus 25% for nonparticipants. In addition, there was a dose-response relationship with participants who attended 25 or more sessions having a 20% lower 5-year mortality rate compared to those who attended fewer than 25 sessions.
For patients with congestive heart failure, randomized clinical trials conducted during the last decade have demonstrated that CR can improve exercise tolerance, quality of life, and disease-related symptoms without adversely affecting left ventricular function. While there is no consensus regarding the optimal exercise program, guidelines support the use of regular aerobic and/or strengthening exercises. Exercise training induces peripheral and central adaptations, including improved vasodilation in active muscle, decreased sympathetic nervous system activation, and increases in peak cardiac output, heart rate, and stroke volume. These adaptations reduce the commonly observed exercise intolerance caused by fatigue and dyspnea. Using peak oxygen consumption (VO2) to measure
exercise capacity, improvements have ranged from 15% to 30%. Other common outcomes associated with heart failure, such as shortness of breath, ability to perform ADLs, anxiety, depression, and general well-being, have all improved with CR. The magnitude of improvement has ranged from 15% to 50% for each of these variables, and improvements in quality of life can be seen as early as 2 months after initiation of the exercise program. In addition to improvements in symptoms and quality of life, reduction in mortality has also been demonstrated. Increased sympathetic nervous system activity and higher plasma and tissue cytokine concentrations are associated with worsening disease and poorer prognosis in patients with heart failure. Exercise training causes a downregulation of these systems, with a resultant 28% reduction in total mortality and hospitalization and a 29% reduction in death rate. Prior to initiation of an exercise program, patients must be clinically stable with controlled fluid status for at least 3 to 4 weeks.
The literature demonstrates a beneficial effect of CR for a variety of cardiac diagnoses, including acute MI, congestive heart failure, and after CABG surgery. Exercise training positively affects both the basic pathophysiology of coronary artery disease and the underlying disease
process. This, in turn, minimizes the impact of disability, improves quality of life, and reduces mortality. Referral to CR increases the likelihood of participation and long-term compliance, which can have significant beneficial effects for the patient.
Peripheral Arterial Disease
Studies have demonstrated that the optimal exercise rehabilitation program for improving the distance walked prior to the onset of claudication uses intermittent walking to the onset of pain. The increased distance achieved occurs through improvements in cardiopulmonary function, peripheral circulation, and walking economy. An exercise program of at least 6 months' duration is required to achieve these improvements. Treadmill walking has been shown to be more effective than strength training in achieving these results.
Amputation
During the initial postoperative period, goals include relieving pain, preventing medical complications, and preventing mobility problems, particularly muscle atrophy and contractures. Between 60% and 80% of persons undergoing a lower extremity amputation experience phantom limb pain. Approximately 10% of those who experience phantom pain rate the pain as severe enough to be disabling. Conflicting evidence exists regarding whether adequate preoperative or perioperative pain control reduces the incidence of phantom pain. However, there is little controversy regarding the importance of adequate pain control for persons undergoing amputation.
Common medical problems after amputation include poor wound healing, skin breakdown, and falls. Early mobilization with or without a prosthesis is critical for recovery of function. Older persons who already have decreased lean muscle mass are at risk of additional muscle atrophy. Joint contractures occur when patients spend significant periods of time sitting in a chair or a wheelchair and can have a negative impact on their ability to ambulate with a prosthesis. Initially after amputation, older persons are instructed in the basics of self-care and mobility and then discharged home. Rehabilitation occurs later, after healing of the amputation site, and can occur in a rehabilitation facility, in the home, or as an outpatient.
For the older adult, comorbid conditions and premorbid functional status impact the success of the surgery and subsequent rehabilitation. Prior to
surgery, the older person should be medically stable, with special attention being paid to cardiopulmonary status. Level of amputation should also be considered. The energy expenditure required for a transtibial amputee is much less than for a transfemoral amputee and may predict a person’s ability to regain ambulatory ability. However, while preserving the knee joint may be beneficial, preoperative evaluation should carefully assess the risk and benefit of knee preservation to avoid surgical revisions and longer lengths of stay in the hospital.
Prosthesis use can decrease energy expenditure with transfers and ambulation and should at least be considered, irrespective of age. While firm criteria for prosthetic prescription do not exist, Medicare has developed a guide for selection of prosthetic knee and ankle components. Indications for prosthesis are based on the likelihood that a person will reach or maintain a defined functional state within a reasonable time and that the person is motivated to ambulate. For example, an individual with the potential to be ambulatory at a household level may qualify for a lower performance prosthetic knee than someone with the potential to be active in the community and/or pursue athletic activities.
Considerations in determining the potential functional status of someone after an amputation may include prior ability to walk, medical status as it relates to the person’s ability to meet the physiologic demands associated with prosthetic use, and ability to learn new skills. These measures would certainly be useful when developing goals for the patient after amputation. A variety of lower limb prosthetic devices are available. To date, studies have not included a large enough sample of older persons to determine the optimal socket or foot design for this population. This determination should be made with the input of the patient, physician, prosthetist, therapists, and insurance company.
Stroke Rehabilitation
Rehabilitation after stroke can begin as early as 24 to 48 hours poststroke once the patient is medically stable and focuses on return to previous mobility and self-care activities, as well as the prevention of medical complications like pressure ulcers or deep vein thrombosis, and minimization of spasticity. Provision of emotional support to the patient and family is essential. There appears to be a statistically significant and clinically important benefit from organized inpatient interdisciplinary
rehabilitation in the postacute period. In several randomized controlled trials, either organized inpatient interdisciplinary rehabilitation or stroke unit care demonstrated improved outcomes, including reduced odds of death.
Guidelines for the management of stroke were developed by Veterans Affairs and the Department of Defense and rate the quality of available evidence.
The guidelines present algorithms for initial assessment as well as management after rehabilitation referral.
Techniques used during stroke rehabilitation vary and are tailored to the needs and deficits of the individual patient. Strengthening, facilitation techniques that progress movement, and task-oriented approaches, like constraint-induced therapy, are common. The goal of therapy, no matter the approach, is improvement of function and quality of life.
After a stroke, patients may have a variety of impairments in addition to muscle weakness. Dysphagia places patients at risk of aspiration, which can be silent in up to one-third of those affected. Communication disorders, including aphasia, also occur in one-third of patients, with prognosis being worse for patients of advanced age or with delayed treatment. During the first month after a stroke, the incidence of bladder incontinence is 50% to 70%, although it returns to levels seen in the general population by 6 months. Treatment can include timed voiding schedules and monitoring of postvoid residuals. Hemiplegic shoulder pain is also a common occurrence, affecting 34% to 84% of patients poststroke. Major risk factors include advanced age and the changes in muscle tone that occur after the stroke. Treatment involves proper positioning to avoid joint subluxation and early range-of-motion exercises to prevent contractures and spasticity. There is some evidence that electric stimulation can improve hemiplegic shoulder pain for up to 6 months after treatment. Depression is another frequent complication after stroke, one that can have a significant negative effect on rehabilitation. Incidence ranges from 15% to 70% depending on the study, and several antidepressant medications have been associated with improvement. An organized, interdisciplinary team approach helps to address these common sequelae after stroke.
Parkinsonism
At this time, there are no guidelines or generally accepted rehabilitation techniques for persons with Parkinson disease. Traditionally, therapy has focused on improving posture, range of motion, exercise capacity, and gait.
There is evidence that exercise programs including resistive and flexibility exercises can improve physical function. One review demonstrated improvements in gait speed, balance, and freezing episodes with exercise. Another commonly used technique is an external cueing strategy, in which auditory pacing, use of a walking stick, or visual cues can help improve gait and decrease episodes of freezing for some patients. Because of the success of rhythmic cueing, studies are exploring the use of treadmill training as a method to improve the gait pattern of patients with parkinsonism. Results from a systematic review are promising, with improvements in gait speed, stride length, and walking distance. The “Training Big” program is used increasingly to improve motor performance through repetitive high-amplitude movements. Adapted from a treatment originally developed to amplify voice, the program has shown significant improvements in gait speed and other functional measures. The long-term benefits of all these exercise interventions are still unknown.
Osteoarthritis and Total Joint Replacement
Several randomized trials have demonstrated the efficacy of strengthening exercises to lessen pain and improve function. Weight loss combined with strengthening exercises was shown to be more effective than weight loss alone for improving function and reducing pain. If pain occurs with an exercise, that exercise should be avoided, and monitoring by a physical therapist during the initial rehabilitation period is reasonable. Osteoarthritis of the hip is less amenable to exercise, probably because improving strength around the ball-and-socket hip joint does not provide the same support as strengthening around the hinged knee joint. Efforts to relieve pain will help the person with arthritis maintain physical activity levels and minimize the effects of deconditioning and muscle weakness.
Total joint arthroplasty is the most common elective surgical procedure done in the United States. The primary indications for arthroplasty are mobility limitation and progressive pain, despite conservative treatments like exercise and use of mobility aids. After total joint replacement, the principal goal is to attain the highest level of functional independence possible.
Rehabilitation after a total hip replacement focuses on strengthening exercises and gait training. Precautions to reduce the risk of hip dislocation may be required during the initial months after hip replacement. For example, in the early stages of recovery from the traditional posterolateral
approach, patients may need to avoid crossing their legs, flexing their hips more than 90 degrees, and rolling their legs in and out to decrease the risk of hip dislocation. Raised toilet seats are also recommended to prevent excessive hip flexion during the first few months after surgery. After total knee replacement, rehabilitation focuses on pain control, reduction of swelling, improvement of range of motion, and strengthening of the muscles around the knee. Unlike recovery from hip replacement, recovery from total knee replacement requires that the patient work hard to attain and maintain range of motion during the first few months after surgery.
Hip Fracture
The initial rehabilitation efforts are focused on early mobilization to prevent complications of bed rest, like deconditioning and deep vein thrombosis.
After surgical repair, decreased weight bearing on the fractured limb is standard, and patients are taught to walk with an appropriate assistive device. The amount of weight that can be placed on the repaired limb depends on fracture stability. If possible, patients should be allowed to bear weight as tolerated as opposed to “touch-down” weight bearing, which is often difficult for older patients to achieve. There is mounting evidence that exercise following hip fracture is beneficial, with higher-intensity and longer-duration programs showing more promising outcomes, although the optimal exercise program to maximize function after hip fracture has not been determined.
Sarcopenia and Deconditioning
While no consensus exists regarding the optimal training program, numerous studies have demonstrated resistance exercises to be very beneficial in the treatment of sarcopenia and deconditioning. Increased muscle mass and strength occur with loading the muscle at 60% to 80% of one-repetition maximum (1RM), two to three times a week. (Some researchers also recommend that low-intensity, high-velocity resistance training be performed at least once a week to address the loss of power that occurs with sarcopenia.)
Improvement in muscle strength and power through resistance exercise can reduce the difficulty older adults may experience in performing daily functional activities and may promote spontaneous additional physical activity. While sarcopenia owing to aging may not be reversible, other
components of the observed decline in physical activity can be ameliorated with exercise.
PHYSICAL ACTIVITY AND EXERCISE POSTREHABILITATION
Many people complete rehabilitative episodes of care with persistent impairments and loss of function. These individuals are at higher risk for the development of secondary and tertiary complications due to lack of physical activity and exercise. Facilitating the transition of individuals with disability from rehabilitation to community-based physical activity and exercise programs is one strategy that is proving to be successful in increasing activity. Physicians and other professionals should be aware of the importance of promoting healthy lifestyles in individuals postrehabilitation.
CONCLUSION
Because disability is common among older persons, rehabilitation is an important component of geriatric health care. Defining the cause or causes of disability will allow the rehabilitation team to provide treatment in the optimal setting for the individual patient. Much remains unknown about the most effective rehabilitation techniques for patients with multiple comorbidities; however, available literature supports the continued use of rehabilitation to improve function, independence, and quality of life for older persons.
FURTHER READING
Anderson L, Sharp GA, Norton RJ, et al. Home-based versus centre-based cardiac rehabilitation. Cochrane Database Syst Rev. 2017;6(6):CD007130.
Bourbeau J, Gagnon S, Ross B. Pulmonary rehabilitation. Clin Chest Med. 2020;41:513–528.
Gerhard-Herman MD, Gornik HL, Barrett C, et al. 2016 AHA/ACC guideline on the management of patients with lower extremity peripheral artery disease: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. Circulation. 2017;135(12):e686–e725.
Jette AM. Toward a common language for function, disability, and health. Phys Ther. 2006;86(5):726–734.
Kumar KR, Pina IL. Cardiac rehabilitation in older adults: new options. Clin Cardiol. 2020;43:163–170.
Lee DJ, Costello MC. The effect of cognitive impairment on prosthesis use in older adults who underwent amputation due to vascular-related etiology: a systematic review of the literature. Prosthet Orthot Int. 2018;42(2):144–152.
Lee KJ, Um SH, Kim YH. Postoperative rehabilitation after hip fracture: a literature review. Hip Pelvis. 2020;32(3):125–131.
Lindsay LR, Thompson DA, O'Dell MW. Updated approach to stroke rehabilitation. Med Clin N Am. 2020;104:199–211.
Marzetti E, Calvani R, Tosato M, et al. Physical activity and exercise as countermeasures to physical frailty and sarcopenia. Aging Clin Exp Res. 2017;29(1):35–42.
McDonnell MN, Rischbieth B, Schammer TT, Seaforth C, Shaw AJ, Phillips AC. Lee Silverman Voice Treatment (LSVT)-BIG to improve motor function in people with Parkinson’s disease: a systematic review and meta-analysis. Clin Rehabil. 2018;32(5):607–618.
Mehrholz J, Thomas S, Elsner B. Treadmill training and body weight support for walking after stroke (Review). Cochrane Database Syst Rev. 2017;8(8):CD002840.
Mora JC, Valencia WM. Exercise and older adults. Clin Geriatr Med. 2018;34(1):145–162.
Peel C, Sawyer-Baker P, Roth DL, et al. Assessing mobility in older adults: the UAB Study of Aging Life-Space Assessment. Phys Ther. 2005;85:1008–1019.
Pereira AP, Marinho V, Gupta D, Magalhaes F, Ayres C, Teixeira S. Music therapy and dance as gait rehabilitation in patients with Parkinson disease: a review of evidence. J Geriatr Psychiatry Neurol. 2019;32(1):49–56.
Rutherford RW, Jennings JM, Dennis DA. Enhancing recovery after total knee arthroplasty. Orthop Clin N Am. 2017;48:391–400.
Smith TO, Jepson P, Beswick A, et al. Assistive devices, hip precautions, environmental modifications and training to prevent dislocation and improve function after hip arthroplasty. Cochrane Database Syst Rev. 2016;7(7):CD010815.
Stinear CM, Lang CE, Zeiler S, Byblow WD. Advances and challenges in stroke rehabilitation. Lancet Neurol. 2020;19:348–360.
Studenski SA, Perera S, Wallace D, et al. Physical performance measures in the clinical setting. J Am Geriatr Soc. 2003;51(3):314.
Treat-Jacobson D, McDermott MM, Bronas UG, et al. Optimal exercise programs for patients with peripheral artery disease. A scientific statement from the American Heart Association. Circulation. 2019;139:e10–e33.
Urits I, Seifert D, Seats A, et al. Treatment strategies and effective management of phantom limb-associated pain. Curr Pain Headache Rep. 2019;23:64.
Winstein CJ, Stein J, Arena R, et al. Guidelines for adult stroke rehabilitation and recovery. Stroke. 2016;47:e98–e169.
SECTION C
Mentation
The Aging Brain
Luigi Puglielli
The success of modern medicine during the last century has been followed by a sharp increase in the average lifespan of the world population. As a result, we have transitioned to a society where problems linked to age-associated disabilities and diseases are highly prevalent. These disabilities and diseases currently absorb a growing fraction of the costs associated with health care management. Importantly, within the next decade, the disability caused by cognitive decline and dementia combined is expected to become the most expensive. Therefore, efforts to study how aging affects brain functioning and why these changes predispose us to cognitive decline and/or dementia have become a priority for our society. Many cellular and molecular aspects of brain aging are shared with other organ systems, including defective autophagy, reduced efficiency in maintaining protein homeostasis, accumulation of intracellular and extracellular protein aggregates, increased oxidative damage to proteins, nucleic acids and membrane lipids, and impaired energy metabolism. However, given the molecular and structural complexity of neural cells, which express approximately 50 to 100 times more genes than cells in other tissues, there are age-related changes that are unique to the nervous system. For example, complex cellular signal transduction pathways involving neurotransmitters, trophic factors, and cytokines that are involved in regulating neuronal excitability and plasticity are subject to modification by aging. These events are immediately reflected by changes in synaptic plasticity, functional
connectivity, and global cognitive adaptability. This chapter describes cellular and molecular changes that occur in the brain during aging and how such changes may predispose to neurodegenerative diseases.
Learning Objectives
Understand how aging affects the brain at the histologic, cellular, and molecular levels, and how these changes are linked to cognitive decline and common neurodegenerative diseases.
Gain a clear understanding of the most prominent biochemical and molecular aspects of the aging brain.
Learn about the effects of aging on cognition and neurodegenerative diseases.
Recognize novel active areas of research in the field of cognitive neuroscience.
Learn how environmental factors influence normal brain aging.
Key Clinical Points
Given the steady increase in average human lifespan, the number of individuals who will experience some degree of cognitive decline is rising.
Healthy aging is associated with atrophy of the gray matter; however, this atrophy alone is not indicative of the presence of disease.
Comprehensive clinical evaluation is necessary to differentiate mild cognitive impairment and dementia from normal aging- associated cognitive decline.
Specific aging-associated histologic, cellular, and molecular changes can predispose to neurodegenerative diseases.
Environmental factors can be modified to mitigate potential effects of aging on brain functions.
AGING AND COGNITION
A large segment of the world population will experience some degree of cognitive decline during aging (Figure 56-1). This decline most typically affects working memory as well as short-term and delayed memory recall; however, a smaller group of individuals will also experience reduced information processing speed and spatial memory. Most of the decline appears to occur after age 60, with minimal or nonexistent changes occurring between the ages of 20 and 60. Functional magnetic resonance imaging and positron emission tomography studies suggest that the cognitive decline might be linked to the progressive reduction in the volume of specialized memory-forming and processing brain areas (Figure 56-2).
Similar age-associated changes have been observed in nonhuman primates, dogs, rats, and mice, suggesting that these are intrinsic age-associated events. Nonmedical interventions, such as physical exercise, cognitive stimulation, and diet, together with treatment of common medical comorbidities, such as obesity, hypertension, hypercholesterolemia, diabetes, and metabolic/hormonal imbalances, might help mitigate age-associated declines.
FIGURE 56-1. Cognitive changes as a result of aging. Trends of normal aging, mild cognitive impairment (MCI), and dementia due to Alzheimer disease (AD) are shown. Due to the progressive increase in lifespan, a higher number of individuals can now reach the age where specific cognitive changes (as a result of normal aging, MCI, or AD) are observed.
It is widely projected that an exponentially increasing number of individuals will suffer from dementia and other cognitive disorders that will affect their daily function and increase mortality (see Figure 56-1). In 2000, the number of patients with dementia worldwide was estimated to be over 20 million. This number is projected to exceed 100 million by 2040, with an average of 5 million new cases every year. Rates of increase are not uniform. Developed countries initially bore most of the “dementia burden”; however, as a result of significant improvement in economic and social conditions, developing countries are now experiencing a larger disease burden. Consequently, the rate of increase in the number of patients with dementia is projected to be almost 100% in developed countries and over 300% in developing countries. Major attention is now being given to the diagnosis of mild cognitive impairment (MCI), a transitional stage between normal aging and dementia. Patients with MCI display more severe cognitive deficits and more evident structural brain changes than seen in normal aging. In general, when diagnosed, they retain independence and are able to function normally within society. However, they are at high risk of developing dementia, most typically Alzheimer disease, with an annual rate of conversion between 10% and 15%. Careful memory evaluation, lifestyle changes, and medical correction of existing risk factors might help delay conversion to clinical dementia.
The ability of the brain to reorganize, develop, and prune neural pathways is called synaptic plasticity. Two components of synaptic plasticity are thought to be important for learning and memory formation: long-term potentiation (LTP) and long-term depression (LTD). LTP involves a rapid influx of calcium into the neuronal cell, leading to the enhancement of cell excitability by the activation of intracellular signaling cascades that increase protein transcription, translation, and the insertion of new receptors into the cell membrane. LTD has the opposite effect, modulating transcription and translation and promoting the internalization of receptors back into the cell, which leads to decreased cell excitability. It is commonly believed that LTP and LTD are the cellular correlates of learning and memory and, therefore, great attention has been devoted to understanding how they are influenced by aging. In general, aged rodents show consistent deficits in induction and maintenance of LTP in the cornu ammonis area 1 (CA1) and in the dentate gyrus, two essential memory-forming areas of the brain. These deficits correlate with performance in hippocampal-dependent memory tasks.
Furthermore, LTP is reduced in the hippocampus of aged rats that demonstrate cognitive impairments relative to aged unimpaired (resilient) rats. Studies have also demonstrated that aged impaired rats are more susceptible to LTD. Specifically, the stimulus threshold for the induction of LTD is lower in aged rats, perhaps making it easier to erase memories.
Finally, mouse models of accelerated aging display early and evident deficits in both LTP and LTD that correlate with memory impairment on behavioral tests.
In conclusion, the normal aging process is accompanied by specific changes in neuronal activities related to the formation and consolidation of memory, leading to a decline in cognitive function. Given the significant increase in average lifespan that we are experiencing, this cognitive decline is becoming more evident, forcing us to devote significant effort to understanding the genetic, molecular, and biochemical changes that characterize the aging brain.
An equally important target of research is understanding why a subset of the aging population retains sufficient cognitive function to remain fully functioning and independent throughout most of life while others do not. This resistance or resilience to age-associated cognitive decline or to age-associated dementia is observed even in the face of similar postmortem brain alterations. Indeed, brain abnormalities typical of the aging brain (as well as the diseased brain), such as amyloid plaques, neurofibrillary tangles (NFTs), Lewy bodies, vascular changes, cortical atrophy, and hippocampal sclerosis, can be observed equally at autopsy in individuals who displayed age-associated cognitive decline or dementia and in those who maintained their cognitive function. Therefore, dissection of the genetic, molecular, and biochemical events that underlie this resistance and resilience to aging itself or to age-associated dementia might help us understand how the brain adapts to age and disease and identify appropriate preventive/therapeutic strategies.
STRUCTURAL CHANGES IN THE AGING BRAIN
Aging is characterized by significant molecular, structural, cytoskeletal, neurochemical, and vascular changes (see Figure 56-2) in the brain.
Structural changes (Figure 56-3) are diffuse and affect the cerebellum as well. Gray matter volumes show a linear decline, while white matter
volumes show a nonlinear decline. At the cellular level, all major cell types in the brain undergo structural changes as a function of age. These changes include nerve cell death, dendritic retraction and expansion, synaptic loss and remodeling, and glial cell (specifically, astrocytes and microglia) reactivity. Such structural changes may result from alterations in cytoskeletal proteins and the deposition of insoluble proteins such as tau and α-synuclein inside neuronal cells and amyloid in the extracellular space. Finally, alterations in cellular signaling pathways that control cell growth and motility may contribute to both adaptive and pathologic structural changes in the aging brain. Table 56-1 provides a brief summary of the most prominent changes.
TABLE 56-1 ■ MOST PROMINENT STRUCTURAL CHANGES OF THE AGING BRAIN
FIGURE 56-2. Volumetric changes in the human brain as a function of age. A. Magnetic resonance imaging (MRI) sections from a 24-year-old healthy woman; B. MRI sections from an 80-year-old healthy woman (nondemented, Mini Mental State Examination = 30, APOE ε3/ε3). The older brain has more atrophy, larger sulci, larger ventricles, and different shape of ventricles due to loss of tissue. Atrophied cerebellum is also noticeable. C. Scatter plots of total gray matter volume (upper panel) and white matter volume (lower panel) derived from healthy volunteers who underwent T1-weighted MRI. Gray matter shows a linear decline with age, whereas white matter (largely myelin) shows a nonlinear decline. (Reproduced with permission from Dr. Barbara Bendlin, Department of Medicine, University of Wisconsin-Madison.)
FIGURE 56-3. Simplified view of changes affecting the brain as a function of age.
Functional Connectivity Changes
MRI-based analysis of structural and functional connectivity has revealed that the brain has highly connected regions, such as the hippocampal formation, cingulate gyrus, cuneus, precuneus, and superior frontal and parietal regions. These regions have a high density of intra- as well as interconnectivity. Several cross-sectional studies support the conclusion that old age is associated with lower “within-network” and higher “between-network” connectivity. This shift is associated with reduced performance on memory- and executive-based tasks. The increased between-network connectivity might reflect a compensatory attempt to maintain functional regional activity.
Interestingly, there is also evidence that older adults display increased bilateral brain activation, while younger adults display preferential unilateral activation to achieve the same task-based performance. A major caveat of these studies is that they are almost exclusively cross-sectional; indeed, the few longitudinal studies performed so far have generated inconclusive results.
Vascular Changes
As in other organ systems, vessels that supply blood to the brain are vulnerable to age-related atherosclerosis and arteriosclerosis, which render the vessels susceptible to occlusion or rupture (stroke), a major cause of disability and death in the older population. Reduced brain perfusion in the absence of overt stroke may play a role in age-related cognitive dysfunction. Decreased cerebral blood flow occurs with advancing age and is accompanied by declines in cerebral metabolic rate for oxygen and glucose use. Age-related changes in cerebral vasculature are generally similar to those that occur in vessels elsewhere in the body and are therefore likely to result from common cellular and molecular changes, including accumulation of atherosclerotic plaques, oxidative damage to vascular endothelial cells, and an inflammatory response in which macrophages may penetrate the blood-brain barrier.
Age-dependent cerebral vascular changes are strongly linked to heart disease and hypertension. Interestingly, apolipoprotein E polymorphisms are linked to increased risk of both atherosclerosis and Alzheimer disease, with apolipoprotein E4 increasing the risk. This association suggests that age-related vascular changes may make an important contribution to the neurodegenerative process in Alzheimer disease. Finally, important transport
functions of cells (endothelial cells and astrocytes) that comprise the blood-brain barrier may also be impaired in the aging brain, and more so in Alzheimer disease. Many of the same medical, behavioral, and dietary approaches now recognized to forestall cardiovascular disease may also forestall cerebrovascular disease; these approaches include correcting existing risk factors such as hypertension and hypercholesterolemia, engaging in physical and mental activities, preventing and/or correcting obesity, and consuming a low-calorie, antioxidant-rich diet.
Synaptic Changes
Synapses are dynamic structural specializations where neurotransmission and other intercellular signaling events occur. There is considerable evidence for synaptic “remodeling” in the brain as we age. Most studies using unbiased methods indicate that aging does not induce a substantial loss of total neurons in memory-forming and processing areas of the brain. However, several studies in rodents and nonhuman primates indicate that neurogenesis decreases in the aging brain, with the greatest decline occurring in middle-age groups. In addition to decreased neurogenesis, evidence suggests that aging is accompanied by loss of synapses. As an example, the extent of synaptic loss in the hippocampus correlates with the severity of learning impairment observed in aged rodents, supporting the notion that the loss of hippocampal synapses directly contributes to the cognitive impairment. The synaptic loss observed in the hippocampus of aged rats may be due to the loss of both presynaptic and postsynaptic terminals.
Ultrastructural studies using electron microscopy have also revealed age-related changes in the type of terminals formed. For example, aged female monkeys with memory impairment have fewer multiple-synapse boutons and twice as many nonsynaptic boutons in the dentate gyrus. Furthermore, aged rodents and monkeys display significant loss of postsynaptic spines in the hippocampus and cortex associated with reduced cognitive performance.
Synapse loss occurs in neurodegenerative disorders and strongly correlates with clinical symptoms. Accumulating data suggest that synaptic degeneration, resulting from excitotoxic events localized to synapses, may initiate the neuronal death process in Alzheimer disease, Parkinson disease, Huntington disease, and stroke. Glutamate receptors are highly concentrated in postsynaptic dendritic spines, which represent sites of massive calcium influx during normal physiologic synaptic transmission. Age-related
decreases in energy availability and increases in oxidative stress, together with disease-specific alterations, such as amyloid β-peptide (Aβ) accumulation in Alzheimer disease and polyglutamine (trinucleotide repeat) expansions in the huntingtin protein in Huntington disease, may render synapses vulnerable to excitotoxic injury.
Cytoskeletal Changes
The cell cytoskeleton consists of polymers of different sizes and protein compositions. The three major types of polymers are actin microfilaments (6 nm in diameter), microtubules (25 nm in diameter), which are composed of tubulin, and intermediate filaments (10–15 nm in diameter), which are made of specific intermediate filament proteins that differ in different cell types (eg, neurofilament proteins in neurons and glial fibrillary acidic protein in astrocytes). To regulate the processes of filament assembly and depolymerization, and to link the cytoskeleton to membranes and other cell structures, neurons and glial cells employ an array of cytoskeleton-associated proteins. For example, neurons express several microtubule-associated proteins (MAPs) that are differentially distributed within the complex architecture of the cells; MAP-2 is present in dendrites but not in the axon, whereas tau is present in axons but not in dendrites. While there are no major changes in the levels of the most abundant cytoskeletal proteins with aging, there are changes in the cytoskeletal organization and in posttranslational modifications (PTMs) of cytoskeletal proteins. For example, increased amounts of phosphorylated tau occur in some brain regions, particularly those involved in learning and memory (eg, hippocampus and basal forebrain). In addition, there is evidence that calcium-mediated proteolysis of MAP-2 and spectrin (a protein that links actin filaments to membranes) is increased in some neurons during aging. Oxidation of certain cytoskeletal proteins is suggested by studies demonstrating their modification by glycation and covalent binding of the lipid peroxidation product 4-hydroxynonenal. A consistent feature of brain aging in humans and laboratory animals is an increase in levels of glial fibrillary acidic protein, a marker of astrocyte activation, which may represent a reaction to subtle neurodegenerative changes.
Tau proteins are perhaps the most studied MAPs in neurobiology. They consist of a group of alternatively spliced proteins that ensure microtubule-dependent neuronal functions by binding and stabilizing the microtubules. In general, the degree of phosphorylation inversely correlates with binding. As
a result, hyperphosphorylated tau proteins dissociate from microtubules and aggregate to form cytosolic filaments, which ultimately result in NFTs, pathogenic aggregates that characterize different forms of age-associated frontotemporal dementias (collectively referred to as tauopathies) as well as Alzheimer disease. The number and distribution of NFTs also increase as a function of age. In addition to the phosphorylation status, the pro-aggregating properties of tau depend on the differential splicing of the MAPT gene, which results in the different tau isoforms. The pattern of phosphorylation and splicing of tau differs in the fetal and adult brain and in the central and peripheral nervous system. It also differs among mammals, probably explaining why only humans and nonhuman primates develop classical NFTs. Work in multiple mouse models has shown that abnormal tau metabolism can cause memory deficits. Furthermore, genetic disruption of tau in mouse models of Alzheimer disease can rescue the cognitive deficits associated with the disease phenotype. Finally, mutations in the gene encoding tau proteins (MAPT) have been associated with hereditary forms of frontotemporal dementias, whereas polymorphisms in the same gene appear to act as genetic risk factors for sporadic progressive supranuclear palsy and corticobasal degeneration.
Intraneuronal pathogenic aggregates are also observed in other neurodegenerative diseases, underscoring common features. In Parkinson disease, degenerating neurons of the central nervous system accumulate Lewy bodies, which are composed of α-synuclein, ubiquitin, and neurofilament protein, with associated MAPs (particularly MAP-1b) and actin-related proteins such as gelsolin. In amyotrophic lateral sclerosis, lower motor neurons are filled with aggregates made of superoxide dismutase 1 (SOD1), TDP-43, and neurofilaments, which concentrate in proximal regions of the axon. The specific molecular events involved in the formation of cytoskeletal alterations in different neurodegenerative disorders have not been clearly and definitively established. More information on specific pathogenic aggregates that characterize different neurodegenerative diseases can be found in Chapters 59, 61, and 63.
Amyloid Accumulation
The amyloid β-peptide (Aβ) is a 38- to 49-amino acid peptide that arises from a much larger membrane-spanning precursor, the amyloid precursor protein (APP). During normal aging, and to a much greater extent in
Alzheimer disease, Aβ forms insoluble aggregates in the brain parenchyma and vasculature. Of the different Aβ species, the 40- and 42-amino-acid peptides are by far the most abundant; Aβ40 preferentially accumulates in the vasculature, while Aβ42 preferentially accumulates in the parenchyma. Small Aβ aggregates (oligomers) can be neurotoxic and can increase neuronal vulnerability to metabolic, excitotoxic, and oxidative insults. Within amyloid plaques (also referred to as senile plaques), which accumulate in the brain parenchyma, Aβ aggregates are organized as fibrils approximately 7 to 10 nm in diameter, intermixed with nonfibrillary forms of the peptide and often surrounded by dystrophic neurites.
In addition to Alzheimer disease brains, amyloid/senile plaques are also present in the hippocampus and neocortex of cognitively normal aged individuals and tend to increase in an age-dependent fashion. Although still under debate, it is now becoming apparent that the number and/or distribution of amyloid plaques does not directly correlate with cognitive decline.
The synaptic loss and the number/distribution of NFTs appear to be better predictors of memory deficits. Indeed, the neuronal/synaptic loss of the hippocampal formation, entorhinal cortex, and entorhinal-hippocampal association system appears to be the only feature able to distinguish normal aging from MCI and the early stages of Alzheimer disease. Although amyloid plaques do not closely correlate with memory deficits, a high plaque load is rarely observed in the absence of cognitive impairment.
Aβ originates from proteolytic cleavage of a much larger precursor, the APP, a type I membrane protein that inserts in the endoplasmic reticulum (ER) before being transported to the neuronal surface. Proteolysis of APP can occur within the secretory pathway while transiting to the cell surface or on the cell surface itself. Substantial evidence indicates that intracellular Aβ aggregates are more pathogenic than amyloid plaques and strongly associate with neuronal and synaptic loss.
Detailed information on the mechanisms that lead to the generation and accumulation of Aβ in the brain as well as Alzheimer disease–relevant pathogenic features can be found elsewhere in this book (see Chapter 59).
Inflammatory Microglia
Microglia differ from other brain-resident cells in that they do not originate from the neural tube. They derive from the yolk sac and invade the central nervous system at approximately the same time neurons are
formed. As the brain develops, microglia establish close connections with neurons and participate in many important developmental functions such as neurogenesis, synapse pruning, and modeling of synaptic networks. In the adult brain, microglia maintain strong connections with mature neurons and continue participating in essential neuronal functions. They respond to neuronal activity but are also capable of triggering neuronal responses. An essential aspect of microglia biology is their ability to respond to a variety of noxae and transition from a quiescent to an immune-active state. The quiescent state is recognized by a ramified morphology and is mostly maintained through soluble molecules that are secreted by healthy neurons. In the presence of neuronal damage, microglia assume phagocytic features and become ameboid-like. When activated, microglia can migrate to the site of injury and secrete proinflammatory cytokines that are essential during the acute phases of injury. Fundamental functions of activated microglia include elimination of damaged/dystrophic neurites, phagocytosis of cellular debris and toxic protein aggregates, promotion of neuronal repair, and neurotrophism.
Compelling evidence indicates that microglia can undergo a process of cellular aging that significantly alters their functions. “Senescent” microglia have dystrophic morphology with process abnormalities such as shortening, deramification, and swelling, as well as cytoplasmic fragmentation and cytorrhexis. They also display reduced motility and phagocytic abilities. The most consistent cellular features of senescent microglia include ER stress, reduced autophagy, accumulation of iron, and secretion of many proinflammatory molecules. This altered proinflammatory signature appears to be a fundamental aspect of the aging brain. Importantly, the transcriptome of activated, senescent microglia differs significantly from that of normally activated microglia, supporting the concept that these are indeed two distinct cellular states. Such a profile is also observed during age-associated neurodegenerative diseases including Alzheimer disease, Parkinson disease, and tauopathies. Dystrophic “senescent-like” microglia are found in close proximity to amyloid plaques, NFTs, and Lewy bodies.
They also appear to react to protein aggregates such as Aβ, NFTs, or α-synuclein. Finally, mutations and genetic variants in genes related to microglia functions have been linked to different late-onset/sporadic forms of neurodegenerative diseases. Examples are TREM2 and CD33 in Alzheimer
disease, TLR2 and LRRK2 in Parkinson disease, and CSF1R in dementia with parkinsonism.
ALTERATIONS IN ENERGY METABOLISM AND MITOCHONDRIAL FUNCTION IN THE AGING BRAIN
During the aging process, changes that occur in cerebral blood vessels, as well as in the neural cells themselves, appear to result in reduced energy availability to neurons. These age-related changes may be accelerated in several different neurodegenerative disorders including Alzheimer disease and Parkinson disease.
Cerebral Metabolism
Reduced glucose use, and changes in enzymes involved in energy metabolism, may occur during normal aging, but are not dramatic. Studies of aging rodents document decreases in glucose and ketone body oxidation, oxygen consumption, local cerebral glucose use, and glycolytic compounds (eg, fructose-1,6-diphosphate). Additional studies show that brain cells in older animals exhibit increased vulnerability to metabolic stresses.
Incorporation of glucose into amino acids declines in the brain of aging mice, and older people are much more vulnerable to metabolic encephalopathy than young people. In contrast to normal aging, activities of several enzymes involved in energy metabolism are severely reduced in Alzheimer disease brain tissue. Three such enzymes, which are involved in mitochondrial oxidative metabolism, are the pyruvate dehydrogenase complex, the α-ketoglutarate dehydrogenase complex, and cytochrome c oxidase. These defects may result from age-associated oxidative damage to the DNA encoding these enzyme systems and/or reduced activity of the proteins in these systems.
Another factor that may contribute to reduced neuronal energy metabolism is impairment of the function of glucose transporter proteins in neuronal membranes. Studies of postmortem brain tissue of Alzheimer disease patients document reduced levels of glucose transporters, and experimental studies of cultured hippocampal neurons and synaptosomes show that insults relevant to the pathogenesis of Alzheimer disease (exposure to Aβ and oxyradical-generating agents) can impair glucose transport.
Impairment of glucose transport and mitochondrial dysfunction would be
expected to lead to ATP depletion and render neurons vulnerable to excitotoxicity.
Mitochondrial Function
Age-related structural changes in synaptic mitochondria have been reported and include a decrease in numbers and increase in size. During normal aging, levels of mitochondrial protein synthesis are unchanged. However, decreases in synthesis of specific mitochondrial proteins that are components of the electron transport chain occur in aging rodents. Damage to mitochondrial DNA progressively increases in somatic cells during the aging process, with the most pronounced damage occurring in postmitotic cells such as neurons. Mitochondrial dysfunction has been linked to several neurodegenerative disorders. In Parkinson disease, there are marked decreases in complex I and α-ketoglutarate dehydrogenase activities. Exposure of cultured dopaminergic neurons to insults relevant to the pathogenesis of Parkinson disease (eg,
MPTP and Fe2+) causes mitochondrial dysfunction. In Alzheimer disease, cytochrome c oxidase and α-ketoglutarate dehydrogenase activity levels are markedly reduced in vulnerable brain regions. Interestingly, mitochondrial deficits are also observed in nonneuronal cells, including platelets and fibroblasts, of Alzheimer disease patients. When mitochondria from platelets of Alzheimer disease patients are introduced into cultured neuroblastoma cells, levels of oxidative stress are increased, suggesting an important contribution of mitochondrial alterations to the increased oxidative stress present in neurons of Alzheimer disease brain. Mitochondrial alterations in neurons have been documented in studies of mouse models of Alzheimer disease (APP and presenilin mutant mice), Parkinson disease (α-synuclein mutant mice), Huntington disease (huntingtin mutant mice), and stroke (middle cerebral artery occlusion).
NEURONAL ION HOMEOSTASIS IN THE AGING BRAIN
Among the properties of neurons that set them apart from many other cell types is their excitability, which is regulated by a complex array of neurotransmitters and ion channels. Neurons express voltage-dependent sodium channels, as well as multiple types of calcium and potassium channels that are differentially expressed among neuronal populations, and are segregated in different cellular compartments (eg, L-type calcium
channels in the cell body, N-type calcium channels in the dendrites, and T-type calcium channels in presynaptic terminals). In addition, neurons possess ion-motive ATPases that play critical roles in reestablishing ion gradients following neuronal stimulation. A variety of age-related alterations in electrophysiologic parameters of neurons have been described in rodents (described earlier in this chapter) and, in some cases, in humans, including increased thresholds for induction of action potentials in cranial nerves, increased afterhyperpolarizations in hippocampal neurons, and impaired LTP/LTD of synaptic transmission in the hippocampus. Moreover, a generalized decrease in neuronal inhibition appears to occur during the aging process.
The calcium ion plays fundamental roles in regulating neuronal survival and plasticity in both the developing and adult nervous system. Calcium mediates some of the effects induced by neurotransmitters and neurotrophic factors on neurite outgrowth, synaptogenesis, and cell survival in many different regions of the developing nervous system. Furthermore, in the adult nervous system, calcium regulates neurotransmitter release from presynaptic terminals and influences postsynaptic changes associated with learning and memory processes. Aging may result in decreases in the activity of the plasma membrane calcium ATPase and in levels of calcium-binding proteins, while increasing calcium influx through voltage-dependent channels and increasing the activation of calcium-dependent proteases.
Several studies have measured changes in calcium conductance across the neuronal membrane in aged rodents. The final step in synaptic transmission that triggers LTP or LTD depends on the amount of calcium entering the cell. Therefore, changes in calcium conductance across the cell membrane could have significant effects on synaptic plasticity in aged animals. Studies have revealed that aged rats display an increased
density of L-type Ca++ channels in CA1 pyramidal cells, leading to increased L-type Ca++ currents. In addition, CA1 pyramidal cells from aged rodents also show an increased duration of calcium-mediated action potentials. The inward flux of calcium in response to an action potential activates a calcium-activated, outward potassium current. These potassium channels are slower to open and close than calcium channels, which leads to a temporary afterhyperpolarization of the cell following an action potential. Studies have revealed that this phenomenon is increased in aged rodents and rabbits, leading to less frequent firing of action potentials in response to a prolonged
depolarizing current. Taken together, these data demonstrate that age-related changes in calcium handling underlie detrimental changes in synaptic plasticity and may partially explain the deficits in plasticity seen over the course of aging.
NEUROTRANSMITTER SIGNALING IN THE AGING BRAIN
A number of alterations in different neurotransmitter systems have been documented in studies of aging rodents and in analyses of brain tissues from humans with age-related neurodegenerative disorders. While some of these alterations likely result from neuronal degeneration, others appear to occur in the absence of cell injury.
Cholinergic Systems
Acetylcholine is employed as a neurotransmitter in select populations of neurons in the brain, prominent among which are basal forebrain neurons that innervate widespread regions of the neocortex and hippocampus; these cholinergic neurons are known to play key roles in learning and memory processes in humans and rodents. Deficits in one or more aspects of cholinergic signal transduction may occur with aging including choline transport, acetylcholine synthesis, acetylcholine release, and coupling of muscarinic receptors to their GTP-binding effector proteins. Cholinergic deficits are much more severe in Alzheimer disease patients and differ qualitatively from the changes observed during normal aging. Particularly striking is a reduced ability of muscarinic agonists to activate GTP-binding proteins in cortical neurons. Increased levels of membrane lipid peroxidation in neurons may contribute to impaired cholinergic signaling; for example,
Aβ, Fe2+, and the lipid peroxidation product 4-hydroxynonenal can impair coupling of muscarinic receptors to the GTP-binding protein Gq11.
Dopaminergic Systems
Prominent reductions in both pre- and postsynaptic aspects of dopaminergic neurotransmission occur during brain aging. Decreases in dopamine levels and dopamine transporter levels in the striatum occur with advancing age, and there is an age-related decrease in levels of D2 receptor-binding sites in
striatum. As with cholinergic signal transduction, there also appears to be an
age-related impairment of coupling of dopamine receptors to their GTP-binding effector proteins. The contribution of oxidative stress to changes in dopaminergic signaling has not been established, although the prominent role of oxyradicals in the pathogenesis of Parkinson disease argues that similar oxidative processes contribute to dopaminergic dysfunction during normal aging. These changes in dopaminergic signaling likely play a role in age-related deficits in motor control and may explain why older adults are susceptible to extrapyramidal effects of dopamine receptor antagonist drugs.
Monoaminergic Systems
Norepinephrine and serotonin are the major monoamine neurotransmitters in the brain. Noradrenergic neurons are located primarily in the locus caeruleus and serotonergic neurons in the raphe nucleus; both types of neurons project to widespread regions of cerebral cortex. There are several subtypes of receptors for norepinephrine, some of which couple to GTP-binding proteins. There are also several subtypes of serotonin receptors; some couple to GTP-binding proteins while others are ligand-gated ion channels. There appear to be increased levels of norepinephrine with aging in some brain regions, while levels of α2-adrenergic receptors may decrease in
cerebral cortex with advancing age. Levels of serotonin may decrease in the striatum, hippocampus, and cerebral cortex. Age-related decreases in levels of evoked serotonin release and of serotonin-binding sites have been reported and may contribute to affective disorders such as depression.
Amino Acid Transmitter Systems
The amino acid glutamate is the major excitatory neurotransmitter in the human brain. Glutamate stimulates ionotropic receptors that flux calcium and sodium; excessive activation of ionotropic glutamate receptors may play a role in the degeneration of neurons in several age-related disorders including stroke, Alzheimer disease, Parkinson disease, and Huntington disease.
Levels of ionotropic glutamate receptors were reported to decrease with aging, but these decreases may be the result of degeneration of the neurons expressing the receptors. The contribution of dysfunction of glutamatergic transmission, in the absence of neuronal death, to age- and disease-related deficits in brain function is unknown. The major inhibitory neurotransmitter in the human brain is γ-aminobutyric acid (GABA). Relatively little
information is available concerning the impact of aging on GABAergic systems, although levels of glutamate decarboxylase and GABA-A binding sites may be decreased. Interestingly, GABAergic interneurons are typically spared in various neurodegenerative disorders including Alzheimer disease.
NEUROENDOCRINE CHANGES IN THE AGING BRAIN
A variety of age-related alterations in neuroendocrine systems have been documented. Of particular importance for human brain aging and neurodegenerative disorders are changes in levels of steroid hormones, particularly glucocorticoids and estrogens. There is considerable evidence for age-related alterations in the diurnal regulation of circulating glucocorticoid levels and an increase in the mean level. Moreover, regulation of the hypothalamic-pituitary-adrenal axis is altered in Alzheimer disease patients, such that plasma levels of glucocorticoids are increased.
Increased levels of glucocorticoids, including those induced by physiologic or psychological stress, can increase the vulnerability of hippocampal neurons to injury and death caused by ischemic and excitotoxic insults, suggesting that glucocorticoids may have a negative impact on the outcome of both acute (eg, stroke) and chronic (eg, Alzheimer disease) age-related neurologic disorders. Estrogen (17β-estradiol) may have a beneficial effect on brain aging. Epidemiologic studies suggest a reduced risk of Alzheimer disease in postmenopausal women who take estrogen replacement therapy, and older women who take estrogens perform better on cognitive tasks.
Surgical ovariectomy in rodents causes changes in neurotrophin receptor signaling, promotes the generation and accumulation of Aβ, and is associated with memory deficits. These effects are abrogated by exogenous estrogens.
Finally, animal and cell culture studies have shown that 17β-estradiol can protect neurons from being damaged and killed by insults relevant to ischemia and Alzheimer disease, including glucose deprivation, exposure to Aβ, and expression of Alzheimer disease–linked presenilin mutations.
Results of clinical findings, however, have not provided convincing evidence that estrogen treatment either reduces risk for Alzheimer disease or related illnesses or enhances cognition.
IMMUNOLOGIC FACTORS IN THE AGING BRAIN
While the blood-brain barrier limits access of circulating lymphocytes to neurons in the brain, it is becoming increasingly evident that the brain is by no means devoid of immune responses. The brain possesses resident immune effector cells called microglia (described earlier in this chapter) that may respond to age- and disease-related neurodegenerative processes. Some data suggest that a decline in peripheral immune function during aging may lead to an autoimmune-like phenomenon in the brain wherein microglia are activated. Inflammatory processes are associated with, and contribute to, the neurodegenerative process in Alzheimer disease and other age-related neurodegenerative disorders; these include activation of microglia in the affected brain regions, increased local cytokine production in association with the neuropathologic changes, and activation of components of the complement cascade system. Collectively, the emerging data suggest a role for chronic inflammatory reactions in the pathogenesis of at least some neurodegenerative disorders. Recent studies have revealed that the choroid plexus, an epithelial monolayer that produces cerebrospinal fluid (CSF) and maintains a dynamic interface between the CSF and the blood, undergoes age-associated changes that are consistent with a type I interferon effect.
Importantly, blocking type I interferon activity in aged mice is sufficient to reestablish a “young” choroid plexus activity and improve memory functions. Also intriguing is the fact that these changes appear to be driven by inflammatory changes that originate from both the brain parenchyma and the systemic circulation. Together, these results suggest that changes in inflammatory markers in the brain or in the periphery can influence (positively or negatively) memory functions; they also suggest that the choroid plexus plays an important role in the age-associated cognitive decline.
NEUROTROPHIC FACTORS IN THE AGING BRAIN
Cells in the nervous system produce a variety of proteins that serve the function of promoting neuronal survival and growth, and protecting the neurons against injury and death. Examples of such “neurotrophic factors” (also referred to as neurotrophins) include nerve growth factor (NGF), brain-derived neurotrophic factor (BDNF), neurotrophin-3, neurotrophin-4, basic fibroblast growth factor (bFGF), and insulin-like growth factor (IGF).
Neurotrophic factors are remarkable in that they can protect neurons against a variety of insults relevant to the pathogenesis of age-related
neurodegenerative conditions. For example, bFGF can protect hippocampal and cortical neurons against metabolic, oxidative, and excitotoxic insults, and can greatly reduce brain damage in rodent models of stroke. One or more neurotrophic factors have also been shown to protect neuronal populations against neurodegenerative disorder-specific insults. Examples are the ability of BDNF to protect dopaminergic neurons against MPTP toxicity (Parkinson disease model) and hippocampal neurons against Aβ-mediated toxicity (Alzheimer disease model). The precise mechanism(s) whereby neurotrophic factors can rescue or prevent neuronal degeneration are unclear and might be different in different neuronal populations. In addition to preserving existing neurons, neurotrophic factors may stimulate the production of new neurons from neuronal stem cells. Such neural stem cells may be able to replace lost or damaged neurons and are therefore receiving considerable attention for their potential use in the treatment of neurodegenerative disorders. An intriguing feature of neurotrophic factors is that their expression is increased by activity in neuronal circuits. Experimental studies in cell culture and in vivo show that such activity-dependent production of neurotrophic factors plays a major role in promoting neuronal survival and neurite outgrowth.
Rearing of rodents in an “intellectually enriched” environment results in expansion of dendritic arbors and increased numbers of synapses in hippocampus and certain regions of cerebral cortex.
The activity and the specificity of neurotrophic factors depend on specific cell-surface receptors. Examples are the tyrosine-kinase family of receptors, TrkA, TrkB, and TrkC, and the p75 neurotrophin receptor (p75NTR). These receptors can assemble as homo- and heterodimers, which display different affinities for specific neurotrophic factors and provide signaling specificity. For example, TrkA:p75NTR heterodimers have higher affinity for NGF than TrkA:TrkA and p75NTR:p75NTR homodimers. Therefore, depending on how these receptors assemble on the neuronal surface, they can transduce slightly different or completely different signals. During aging, the levels of TrkA decrease while those of p75NTR increase; thus, an expression pattern that favors the formation of TrkA:TrkA homodimers switches toward one that favors the formation of TrkA:p75NTR heterodimers and p75NTR:p75NTR homodimers. While this switch occurs, the levels of NGF remain overall unchanged. However, since the affinity of the above receptor complexes for NGF is different, the signaling cascade will
change as a result of aging. As an example, the above TrkA-to-p75NTR switch has been linked to cognitive decline and increased risk of Alzheimer disease in rodents. A TrkA-to-p75NTR switch has also been observed following surgical menopause in rodents, suggesting a possible impact in postmenopausal women.
The above competing activity of neurotrophin receptors is further complicated by the competing activity of neurotrophins. An example is BDNF competing with NGF for binding on TrkA:TrkA homodimers and pro-NGF competing with NGF for binding on p75NTR:p75NTR homodimers and TrkA:p75NTR heterodimers. Importantly, levels of pro-NGF increase as a function of age while those of NGF remain unchanged. Mice with increased levels of pro-NGF develop a brain phenotype that resembles Alzheimer disease, and postmortem Alzheimer disease brain tissues express more pro-NGF than age-matched controls. In conclusion, complex scenarios can result as a function of aging, leading to different neurotrophin-receptor interactions and, ultimately, to different biological effects. Since some of these changes are specific to certain brain areas, we can expect different physiologic/pathologic outcomes.
AUTOPHAGY AND THE AGING BRAIN
Autophagy is an evolutionarily conserved catabolic process that enables the cell to respond to both extracellular and intracellular stress signals in order to maintain cellular homeostasis. It helps dispose of large toxic protein aggregates that form within the cell, digest sick/damaged organelles, respond to pathogenic infections, and manage a plethora of metabolic challenges. The basic process of autophagy requires “sequestration” of unwanted material (ie, toxic protein aggregates or damaged organelles) into a double-membrane vesicular structure called an autophagosome, which then fuses with lysosomes, where enzymatic digestion of the entire autophagosome occurs.
This process is tightly regulated and involves more than 40 different proteins collectively referred to as the “autophagy machinery.” To ensure correct activation and recruitment of this machinery, the cell has devised a series of organelle- and compartment-specific receptors that allow autophagy to be activated in a very specific and targeted fashion. Terms such as reticulophagy/ER-phagy, mitophagy, and nucleophagy are commonly used to indicate ER-specific, mitochondria-specific, and nucleus-specific autophagy, respectively.
Malfunction of autophagy contributes to the progression of many diseases across the lifespan, including sporadic and hereditary forms of neurodegeneration, cancer, inflammation, and many age-associated diseases. Reduced or malfunctioning autophagy is also considered a hallmark of cellular aging. Many progressive neurodegenerative diseases, including Alzheimer disease, Parkinson disease, and tauopathies, are characterized by the aberrant accumulation of toxic protein aggregates.
Compelling data indicate that increased levels of autophagy can be beneficial in mouse models of diseases characterized by increased accumulation of toxic protein aggregates. As such, improving normal proteostatic mechanisms is an active target for biomedical research, thus explaining the large number of ongoing clinical trials exploring the therapeutic potential of stimulating autophagy.
Genetic or pharmacological stimulation of autophagy in Caenorhabditis elegans, Drosophila melanogaster, and mammals improves metabolic health and proteostasis (protein homeostasis), and extends lifespan. The lifespan extension can go from as low as 20% to as high as 80%, indicating a rather robust effect. Conversely, genetic manipulations that reduce or block the autophagic response lead to reduced lifespan and a plethora of phenotypic manifestations that are consistent with accelerated forms of pathogenic aging. An example of this complexity is the ER acetylation machinery, which is responsible for the disposal of toxic protein aggregates that form within the ER and secretory pathway. ER acetylation is ensured by a membrane transporter, AT-1, which translocates acetyl-CoA from the cytosol into the ER lumen. In the ER lumen, acetyl-CoA is used by two ER-based acetyltransferases, ATase1 and ATase2, which acetylate ER-resident and transiting nascent glycoproteins. Genetic manipulations that lead to increased influx of acetyl-CoA into the ER and increased ER acetylation in the mouse (AT-1 sTg mice) cause a segmental form of progeria. The phenotype is reminiscent of the progeria-like manifestations of patients with gene duplications of AT-1, and includes alopecia, skin lesions, rectal prolapse, osteoporosis, cardiomegaly, muscle atrophy, cognitive impairment, systemic inflammation, and accumulation of senescent cells. Biochemical inhibition of the ATases downstream of AT-1 can restore the proteostatic function of the ER and rescue the progeria phenotype of AT-1 sTg mice.
Furthermore, genetic or biochemical inhibition of the ATases can significantly delay the progression of Alzheimer disease in the mouse.
A major caveat of translational approaches aimed at stimulating autophagy and improving the proteostatic capacity of the cell is that, as mentioned above, autophagy itself is highly selective and can be triggered to dispose of toxic protein aggregates in different places (ie, ER and secretory pathway, cytosol, mitochondria). Therefore, a “one-approach strategy” that fits all neurodegenerative diseases will not be viable, and different autophagy-stimulating compounds will likely have a high degree of disease specificity.
GENETIC FACTORS IN BRAIN AGING AND NEURODEGENERATIVE DISORDERS
Several microarray studies have now been performed in mice, rats, nonhuman primates, and humans to analyze the genetic profile of the aging brain. They all indicate that normal aging is not accompanied by a genome-wide dysregulation of transcription but rather by specific changes that affect only a small subset of genes, which accounts for about 3% to 5% of all the genes expressed in the brain. These changes (Table 56-2) will not be discussed here as their overall biological significance is still uncertain. Of note, a similar expression profile was also found in patients with Alzheimer disease, supporting the view that a continuum might exist between normal aging and Alzheimer disease.
TABLE 56-2 ■ GENETIC PATHWAYS OF THE BRAIN THAT ARE AFFECTED BY AGING
Longevity Genes
It is now evident that specific genetic, biochemical, and molecular pathways are intrinsically related to aging. Some of these pathways were initially identified in lower organisms and then later confirmed in higher organisms, while others were immediately identified in higher organisms. These include the insulin-like growth factor 1 receptor (IGF-1R), Delta40p53, and Klotho.
Among them, IGF-1R has probably received the most attention. The first evidence that IGF-1R signaling can regulate the progression of aging came from C elegans, where mutations that reduce IGF-1R activity were able to increase the lifespan of the animal. Similar results were then obtained in D melanogaster and, later on, in mammals. Importantly, mutations that act on IGF-1R downstream targets were also able to modify the lifespan of the animals, providing a robust and definitive connection between IGF-1R and aging. Overall, reduced IGF-1R activity extends lifespan and delays age-associated events, while increased IGF-1R activity achieves the opposite effects. A partial block of IGF-1R signaling is also achieved by caloric restriction, which extends the maximum lifespan and delays many biological changes that are associated with aging. In humans, genetic variations that reduce IGF-1R signaling appear to be beneficial for old-age survival and preservation of cognitive functions, suggesting that the mechanisms regulating lifespan and aging via this pathway are evolutionarily conserved. Finally, reduced IGF-1R signaling can rescue Alzheimer disease phenotypes in mouse models.
Delta40p53 is a short N-terminally truncated isoform of the tumor suppressor gene TP53. The common TP53 gene generates at least 12 different proteins through a combination of alternative promoter usage, alternative splicing, and alternative initiation of translation. Of them, p53 is the most “famous” and most studied due to its pathogenic role in many forms of cancer. Delta40p53 (also referred to as p44 or p47) lacks the first N-terminal 39 amino acids of full-length p53 but retains the DNA-binding domain as well as one of the two transactivation domains. Delta40p53 retains some of p53 functions but lacks others; it retains some of the regulatory elements but lacks others. Mice overexpressing full-length p53 (“Super p53” mice) are resistant to cancer development but have normal
maximum lifespan. In contrast, mice overexpressing Delta40p53 (p44+/+ mice) display a segmental progeria phenotype that mimics an accelerated form of pathogenic aging. The phenotype includes early onset of diabetes, osteoporosis, memory impairment, and reduced lifespan. To date, at least five additional mouse models with altered p53 activity have been shown to develop an accelerated aging phenotype, providing a robust and definitive connection between TP53 and aging. Recent studies have also shown that Delta40p53 regulates the generation of Aβ as well as the phosphorylation status of tau, suggesting a possible connection with the accumulation of amyloid plaques and NFTs that characterizes the aging as well as the Alzheimer disease brain (discussed earlier in this chapter). Interestingly, mice engineered to overexpress p44 develop an accelerated form of Alzheimer disease neuropathology.
Klotho is a cell-surface membrane protein that can also be released in the circulation. Klotho has glycosidase-like activity. Its levels and enzymatic
activity decline during aging. Genetic variants of KLOTHO are associated with human aging and klotho-deficient mice display a progeroid phenotype that resembles aging. The phenotype includes atherosclerosis, skin atrophy, osteoporosis, reduced fertility, emphysema, memory defects, and reduced lifespan. In contrast, mice overexpressing klotho display increased lifespan and increased resistance to several age-associated features. A possible connection between klotho and IGF-1R has also been delineated. Human variations that lead to increased circulating levels of klotho are associated with greater cortical volumes in the brain. Finally, overexpression of klotho in the mouse enhances cognition and rescues some of the deficits that characterize Alzheimer disease.
In addition to the above, polymorphisms that are associated with longevity and preserved physiologic functions have also been identified in two cholesterol-related genes, apolipoprotein E (APOE) and cholesteryl ester transfer protein (CETP). APOE carries cholesterol in the circulation, while CETP facilitates the transfer of cholesteryl esters and triglycerides between circulating lipoproteins. Individuals normally inherit two APOE alleles, of which there are three isoforms (E2, E3, and E4). The E2 allele has been linked to longevity and reduced incidence of Alzheimer disease. In contrast, the E4 allele has been linked to increased risk for Alzheimer disease. Certain CETP variants have been linked to increased levels of high-density lipoproteins (HDLs), reduced progression of atherosclerosis, and longevity. The “longevity effect” of APOE and CETP might be explained, at least in part, by their ability to reduce the progression of atherosclerosis in the vasculature.
Another possible longevity gene is that encoding an isoform of angiotensin-converting enzyme, although its mechanistic links to aging are unclear. Finally, the multigene major histocompatibility system appears to influence lifespan and may act by sustaining functions of the immune system. Additional information on longevity genes and their impact in age-associated events can be found in Chapter 1.
Disorder-Specific Genes
Considerable progress has been made in studying human genetic disorders that cause progeroid syndromes, which are characterized by a disease phenotype mimicking an “accelerated” form of aging. Although progeroid syndromes can be classified as unimodal (affecting one tissue/organ) and
segmental (affecting multiple tissues/organs), the term progeroid syndrome is usually limited to the segmental forms of the disease (Table 56-3).
Examples include Werner syndrome, Bloom syndrome, Rothmund–Thomson syndrome, Hutchinson–Gilford progeria syndrome, and Cockayne syndrome. Patients affected by these disorders have limited lifespan and develop a complex array of disease manifestations, including type 2 diabetes, osteoporosis, hair loss, skin atrophy, atherosclerosis, cardiomyopathy, heart failure, chronic obstructive pulmonary disease, renal insufficiency, and neurologic abnormalities. The neurologic defects are often subtle and difficult to diagnose; when evident, they include bulbar, extrapyramidal, and cerebellar symptoms, deafness, retinopathy, cognitive deficits, corticospinal symptoms, and peripheral neuropathy. Brain imaging shows diffuse white matter pathology as well as different degrees of gray matter pathology in memory-forming and processing areas. The genetic defect has been mapped in most classical forms of progeroid syndromes. Specifically, Werner syndrome has been associated with mutations in WRN; Bloom syndrome has been associated with mutations in BLM; Rothmund–Thomson syndrome has been associated with mutations in RECQL4; Hutchinson–Gilford progeria syndrome has been associated with mutations in LMNA; and Cockayne syndrome has been associated with mutations in ERCC8 and ERCC6. All the above genes encode proteins that are involved in different aspects of DNA transcription, repair, or recombination, underscoring common pathogenic elements.
TABLE 56-3 ■ SEGMENTAL PROGEROID SYNDROMES WITH COGNITIVE CHANGES
Although progeroid syndromes manifest with symptoms and physical features that are—at least in part—reminiscent of an accelerated form of aging, they must be viewed as diseases rather than true forms of accelerated aging. It is plausible to assume that the dissection of the molecular phenotype of these diseases will inform us about aging. However, it is also plausible to expect that the underlying defects of human progerias are substantially different from those of normal aging. Additional discussion of human progerias can be found in Chapter 1.
Considerable progress has also been made in identifying genetic factors that play pathogenic roles in Alzheimer disease, the most common form of dementia associated with aging. Specifically, disease-causing mutations in APP, PSEN1, and PSEN2 have been identified in familial (early-onset) forms of Alzheimer disease. In addition, several “predisposition” genes have been identified in which polymorphisms increase the risk for developing sporadic
(late-onset) Alzheimer disease. Description of the most important polymorphisms identified to date as well as pathogenic roles of APP, PSEN1, and PSEN2 can be found in Chapter 59.
The Brain as a Regulator of Lifespan
Among the very first mouse models of extended lifespan were the Ames and the Snell dwarf mice. Both models had a selective defect in the secretion of key hormones from the pituitary gland, specifically, growth hormone (GH), prolactin, and thyroid-stimulating hormone. The 40% to 60% extension of lifespan was also accompanied by evidence of delayed aging. The increased lifespan of the Ames and Snell mice was primarily linked to the deficiency of the GH-IGF1 axis. GH is released in the circulation by the pituitary gland; upon binding to its own receptor in the liver, it causes secretion of IGF1, which then binds to IGF-1R and stimulates IGF-1R signaling. Mice with isolated deficiency in growth hormone secretion (Little mice) or lacking the growth hormone receptor (GHR-KO mice) also displayed longevity. The lifespan extension in the Little and GHR-KO mice was in the 25% to 55% range. Finally, mice with reduced secretion of IGF1 or reduced levels of
IGF-1R also displayed different levels of increased lifespan. In essence, the Ames and Snell mice represent the very first evidence that the brain itself (or specialized sets of neurons in the brain) could influence the lifespan of the animals. Subsequent studies in lower organisms (C elegans and D melanogaster) confirmed this conclusion. More recently, genetic disruption of IRS2, an adaptor protein that acts downstream of IGF-1R and
the insulin receptor, in the mouse brain (bIrs2–/+ and bIrs2–/– mice) also caused a significant increase in the lifespan of the animals. Regardless of the specific mechanism(s) involved in the phenotype of the above genetic models, it is now well accepted that in addition to being affected by aging, the brain (or the nervous system) itself can affect aging.
EMERGING AREAS OF STUDY
Epigenetics
A series of transcriptome-based studies across different species indicate that the aging brain is characterized by global changes within different genetic pathways. In general, genes involved in vesicular transport, synaptic turnover, and plasticity appear to be downregulated, while genes involved in neuroinflammation and stress response appear to be upregulated. These changes are likely the result of alterations in the normal mechanisms of epigenetic regulation of gene expression, which include DNA methylation, histone PTMs, and small noncoding microRNAs (miRNAs).
DNA methylation of the pyrimidine ring of cytosine within CpG and non-CpG sites is associated with repression of transcription. Studies in aged animals have reported hypermethylation among synapse-related genes and hypomethylation among neuroinflammatory genes. However, these
findings have not been consistently replicated across laboratories and animal species. This inconsistency might—at least in part—be explained by the different behavior of CpG and non-CpG sites with the former remaining unchanged and the latter becoming hypermethylated as a function of age.
However, to date, the effect of age on DNA methylation in the brain remains unclear.
Histone PTMs include acetylation, methylation, phosphorylation, ubiquitination, and sumoylation. The effect of these modifications remains to be fully understood as it might depend on the specific modification, the extent of modification, and the location of the histone tag. As with DNA methylation, no general conclusions can be drawn on the effect of age on histone PTMs. Furthermore, the PTM profile of histone proteins does not appear to highlight specific changes in memory functions between young and old animals or to distinguish cognitively normal from cognitively impaired aged humans.
miRNAs are small (18–25 nucleotides long) noncoding RNAs that can pair with complementary sequences within coding mRNAs and inhibit translation. An intriguing aspect of miRNAs is that they can be found packed into small single-membrane circulating vesicles referred to as exosomes.
Exosomes appear to originate from different cells and tissues and can cross biological membranes. As such, they might function as a general communication system connecting different cells, tissues, and organs by delivering cargo molecules. No study has reported consistent changes in either levels or types of miRNAs as a function of brain aging. However, the fact that miRNAs are present within exosomes has spurred great interest. So far, differences in experimental approaches as well as inconsistent characterization of exosomal preparations have limited the study outcomes. Furthermore, it is still unclear whether exosomes can enter the brain or interact with neurons (or other cell types) to deliver miRNAs with high specificity.
The Glymphatic System
The glymphatic system serves as a “waste drainage” system for the brain. It includes a perivascular network for the flux of the CSF connected to the lymphatic system that is associated with meninges, cranial nerves, and large vessels exiting the skull. The flux of CSF is achieved by convection through the astrocyte end-processes and toward the perivenous space to ultimately reach the lymphatic system. CSF transport across the astrocyte network requires aquaporin 4 (AQP4), a member of the AQP family of water channels. Animal studies suggest reduced efficiency of glymphatic-mediated transport of cargo material as a function of age. A similar decline has been documented in mouse models of Alzheimer disease, stroke, and traumatic brain injury. Aging, stroke, and traumatic brain injury are the strongest risk factors for Alzheimer disease. Although it is still unclear how important the glymphatic system is for the removal of macromolecules from the brain, there are sufficient data indicating that both Aβ and tau oligomers use—at least in part—this route. Consistent with this, genetic knockout of AQP4 in the mouse appears to impair Aβ removal. A major area of concern is the fact that the glymphatic system has been almost entirely characterized in the rodent brain and it is unknown whether an identical system is in place in humans and whether it is efficient enough to play a fundamental role in the removal of macromolecules.
DIETARY FACTORS IN BRAIN AGING AND NEURODEGENERATIVE DISORDERS
It is now evident that the diet can affect one’s risk of age-related neurodegenerative disorders. In particular, emerging findings indicate that several dietary risk factors for prominent age-related diseases including cardiovascular disease, cancer, and diabetes are also risk factors for Alzheimer disease, Parkinson disease, and stroke. A summary of possible preventive measures to improve brain health can be found in Table 56-4.
TABLE 56-4 ■ PREVENTIVE MEASURES TO PROTECT THE AGING BRAIN
Calorie Intake
Apart from genetic manipulation, the only known means of increasing the lifespan of rodents and nonhuman primates is by decreasing their calorie intake; both maximum and mean lifespan can be increased by up to 40%. The average lifespan of humans is certainly decreased by overeating, although it remains to be determined whether maximum lifespan can be increased through calorie restriction. Epidemiologic data suggest that individuals with a low calorie intake are at a reduced risk for Alzheimer disease and Parkinson disease. The appearance of biochemical markers of aging, as well as of deficits in learning, memory, and motor function, is delayed in rodents maintained on dietary restriction. Neurons in the brains of rats and mice maintained on dietary restriction are more resistant to dysfunction and death in experimental models of Alzheimer disease, Parkinson disease, Huntington disease, and stroke. A 20-year longitudinal study in rhesus macaques showed that a 30% reduction in daily calorie intake reduced the rate of age-related deaths as well as the incidence of typical age-associated diseases, such as diabetes, cancer, cardiovascular disease, and brain atrophy. The ability of calorie restriction to prevent brain atrophy was observed across several gray matter areas involved in both motor and executive functions. Magnetic resonance imaging also showed preserved volumes and insulin sensitivity in memory-forming and processing areas, while postmortem histology showed reduced astrogliosis. Therefore, a large body of evidence from lower organisms to nonhuman primates delineates a relationship between calorie intake and progression of aging and/or incidence of age-associated events. The mechanism whereby caloric restriction increases the resistance of neurons to the adverse effects of aging is not entirely known. Studies in yeast, worms, flies, rodents, and monkeys suggest that the reduced calorie intake causes a comprehensive metabolic reprogramming that involves key nutrient-responsive signaling pathways.
Folic Acid (Homocysteine)
It was recognized long ago that folic acid deficiency can cause abnormalities in the developing nervous system. Subsequently, it was shown that people with low levels of folic acid tend to have elevated levels of homocysteine and that this condition is associated with increased risk of cardiovascular disease and stroke. Homocysteine is produced during metabolism of methionine, and folic acid plays an important role in removing homocysteine via remethylation. Epidemiologic findings suggest that people with elevated homocysteine levels may be at increased risk of Alzheimer disease and Parkinson disease. Animal studies support a cause-effect relationship between elevated homocysteine levels and neuronal vulnerability to neurodegenerative disorders. For example, hippocampal neurons of APP mutant mice (Alzheimer disease model) and substantia nigra dopaminergic neurons in MPTP-treated mice (Parkinson disease model) exhibit increased vulnerability to degeneration when maintained on a folic acid–deficient diet. By increasing homocysteine levels, folic acid deficiency may promote accumulation of DNA damage by inhibiting DNA repair. In neurons, the increased DNA damage can trigger apoptosis, particularly under conditions of increased oxidative or metabolic stress. Of note, results of clinical studies to date have failed to provide any convincing evidence that folic acid supplementation either reduces risk for dementia or vascular diseases.
Stimulatory Phytochemicals and Antioxidants
Epidemiologic findings suggest that individuals who regularly consume vegetables and fruits have a lower risk of developing age-related neurodegenerative disorders compared to those who eat few such plant
products. Several phytochemicals have been reported to enhance neuronal plasticity and survival in studies of animal models of neurodegenerative disorders. Examples include sulforaphane, curcumin, allicin, resveratrol, and other grape-derived polyphenols. Instead of functioning as direct antioxidants, many of these beneficial phytochemicals may stimulate mild adaptive stress responses that result in increased production of antioxidant enzymes, neurotrophic factors, and other protective proteins in neurons. The possible therapeutic benefits of several such stimulatory phytochemicals are currently being tested in human subjects with age-related neurologic disorders.
Epidemiologic findings also support a protective effect of antioxidants found in fruits and vegetables against stroke, and possibly Alzheimer disease. Studies in animal models of Alzheimer disease, Parkinson disease, Huntington disease, stroke, and amyotrophic lateral sclerosis have documented beneficial effects of some antioxidants. Positive effects have been reported for several commonly used dietary supplements, including vitamin E, creatine, and ginkgo biloba. However, the effects of such antioxidants are relatively subtle compared to the quite striking neuroprotective effects of dietary restriction.
PHYSICAL VERSUS COGNITIVE TRAINING AND THE AGING BRAIN
Several studies have revealed that both cognitive and physical training of older adults can cause changes in brain connectivity that are reflected in improved cognitive functions. Proposed mechanisms responsible for the positive changes imparted by physical training include angiogenesis and improved vascular functions, synaptogenesis, and neurotrophin-mediated signaling. Comparison of cognitive and physical training shows broadly similar positive effects, with the exception of the hippocampus and frontal brain regions, where physical training appears to induce effects that are more robust. When studying changes within specific cognitive domains, both forms of training positively affect executive functions, as well as working, short-term, and long-term memory. However, physical training appears to be more effective for spatial memory and speed, while cognitive training appears to be more effective for problem solving and multitasking. Whether physical and cognitive training can complement each other is currently unclear,
although some studies do report complementary improvement of structural and functional connectivity.
BIOMARKERS OF AGING, BRAIN AGING, AND ALZHEIMER DISEASE
Given the steady increase in average human lifespan, there is a substantial need to identify older individuals who are at higher risk for morbidity or mortality. Several studies have tried to address this need by identifying “static” and/or “dynamic” biomarkers of aging that could predict the future onset of age-associated diseases. Ideally, such an instantaneous and unbiased profile would provide valuable information about the “biological age” of the body and the overall vulnerability to diseases and death. Such a “snapshot” would also provide opportunities to investigate how different variables, such as socioeconomic status, demographic distribution, and behavior, can interact with biological and genetic factors to influence rates of aging and disease vulnerability in individuals. The list of potential biomarkers differs among the different studies and can encompass more than 40 different variables (selected markers are listed in Table 56-5). In general, markers of cognitive performance, systemic inflammation, cardiovascular disease/failure, and metabolic imbalance appear to be strong predictors.
However, there is debate over whether static or dynamic profiles are more informative. For example, visit-to-visit variability of blood pressure appears to be a stronger predictor of cardiovascular risk and all-cause mortality in the 70 to 75 age group than stable moderate hypertension. In the case of Alzheimer disease, longitudinal changes in CSF- and MRI-based biomarkers and in cognitive test scores are stronger predictors. In essence, there is consensus that maintaining a stable trajectory of certain biomarkers or risk factors for major age-associated diseases may be more beneficial for reducing both morbidity and mortality. Identification of a biomarker profile is complicated by the genetic profile and socioeconomic makeup of the individual (and population). This is particularly evident when trying to assess cognitive abilities and functional independence, which are heavily influenced by intrinsic individual- and race-based differences.
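The “dynamic profile” idea can be made concrete with simple arithmetic: visit-to-visit variability is often summarized as a coefficient of variation across repeat measurements. The sketch below is purely illustrative, not a validated clinical index; the readings, and the choice of systolic blood pressure as the marker, are invented for the example.

```python
from statistics import mean, stdev

def visit_to_visit_variability(readings):
    """Coefficient of variation (%) across repeat visits.

    One common way of quantifying visit-to-visit variability of a
    marker such as systolic blood pressure; higher values indicate
    a less stable trajectory. Illustrative only, not a clinical tool.
    """
    if len(readings) < 2:
        raise ValueError("need at least two visits")
    return 100.0 * stdev(readings) / mean(readings)

# Invented systolic readings (mm Hg) across five visits:
stable = [142, 140, 141, 139, 143]   # stable moderate hypertension
labile = [118, 162, 131, 155, 124]   # high visit-to-visit variability
```

On these invented numbers, the labile series yields a much higher coefficient of variation than the stable one, which is the pattern the text links to higher cardiovascular risk.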
TABLE 56-5 ■ BIOMARKERS OF AGING AND ALZHEIMER DISEASE
The steady increase in average human lifespan has also been accompanied by a steady increase in both the prevalence and incidence of Alzheimer disease. As discussed in Chapter 59, clinical diagnostic criteria for Alzheimer disease have limited accuracy, do not specifically separate Alzheimer disease from other neurodegenerative diseases, and do not correlate strongly with the severity of the neuropathology as assessed at autopsy. Therefore, there has been great effort to identify biomarkers of Alzheimer disease with the purpose of differentiating Alzheimer disease from other neurodegenerative diseases and from “normal” brain aging. In general, CSF-based biomarkers have shown much higher predictive power than peripheral blood-based biomarkers (see Table 56-5). Given their central pathogenic role, it is not surprising that Aβ and tau have emerged as the strongest predictors. In the case of Aβ, the progressive decrease in CSF Aβ42 differentiates Alzheimer disease patients from cognitively
normal/stable individuals and correlates with the severity of the pathological changes observed at autopsy. Although there is no experimentally proven explanation for the association between reduced CSF Aβ42 measured in
living patients and the amyloid plaque load observed at autopsy, the fact that they correlate has been consistently documented. The predictive power of measuring CSF Aβ improves when both the 42 and 40 amino acid-long forms of the peptide are measured. Indeed, the Aβ42/Aβ40 ratio shows higher
concordance with amyloid PET imaging. Again, although there is no experimentally proven explanation for the increased predictive value of the Aβ42/Aβ40 ratio, its association with disease severity shows a mean sensitivity and specificity of around 85% to 90%. Furthermore, the CSF Aβ42/Aβ40 ratio appears to resolve nonspecific features of Aβ42 measures alone. Indeed, decreased CSF Aβ42 levels are also observed in other forms
of dementia, such as frontotemporal dementia and Lewy body dementia, while changes in the CSF Aβ42/Aβ40 ratio are not. CSF levels of total tau correlate strongly with disease severity in Alzheimer disease patients. Tau is a cytosolic protein, and its release into the extracellular space is largely a result of neurodegeneration. As such, it is not surprising that tau levels reflect the severity of neurodegenerative damage. Although CSF levels of total tau do not differentiate Alzheimer disease from other neurodegenerative diseases, levels of phosphorylated (phospho)-tau appear to be quite specific for Alzheimer disease. Different biochemical assays are currently available for the analysis of phospho-tau species. Combining Aβ and tau measures in the CSF offers improved diagnostic potential even in the prodromal stages of the disease. The combination of these biomarkers with amyloid-based PET imaging offers a consistent and reliable way to follow the progression of the Alzheimer form of dementia in an unbiased fashion.
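The Aβ42/Aβ40 ratio itself is simple arithmetic. As a hedged sketch (the cutoff value below is hypothetical; real cutoffs are assay- and platform-specific and must come from validated laboratory standards), flagging a CSF profile might look like:

```python
def abeta_ratio(abeta42_pg_ml, abeta40_pg_ml):
    """CSF Abeta42/Abeta40 ratio; lower values suggest amyloid pathology."""
    if abeta40_pg_ml <= 0:
        raise ValueError("Abeta40 concentration must be positive")
    return abeta42_pg_ml / abeta40_pg_ml

# Hypothetical cutoff for illustration only; not a clinical threshold.
HYPOTHETICAL_CUTOFF = 0.072

def flag_amyloid_positive(abeta42, abeta40, cutoff=HYPOTHETICAL_CUTOFF):
    """Flag a profile whose ratio falls below the (hypothetical) cutoff."""
    return abeta_ratio(abeta42, abeta40) < cutoff
```

Normalizing Aβ42 by Aβ40 in this way is what corrects for interindividual differences in overall Aβ production, which is one proposed reason the ratio concords better with amyloid PET than Aβ42 alone.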
Among emerging CSF biomarkers of brain aging and neurodegeneration, neurogranin and the neurofilament light chain appear to hold the strongest potential. Both are cytosolic proteins and, therefore, like tau, they reflect neurodegenerative events rather than early forms of neuronal toxicity. Among “naturally secreted” CSF biomarkers, brain-derived neurotrophic factor (BDNF) and pro-nerve growth factor (pro-NGF) appear to be among the strongest predictors.
Indeed, several studies have reported decreased CSF levels of BDNF and increased levels of pro-NGF with aging. Both BDNF and pro-NGF are members of the large neurotrophin family and play fundamental roles in brain physiology. As such, they are particularly attractive candidates. Neurotrophins and their receptors were discussed earlier in this chapter.
CONCLUSION
Structural changes occur in the brain during aging and appear to be compensatory responses to adverse changes in cellular metabolism that
accompany the aging process. There are several biochemical processes that may predispose neurons to dysfunction and death in aging and neurodegenerative disorders. The discovery that the lifespan of an organism can be regulated by specific genetic, molecular, and biochemical pathways has changed our view of aging itself and has spurred active interest in studying the basic biology of aging to understand diseases. At the same time, disease-driven research has helped discover pathways that are relevant for aging. Convergence between age- and disease-related research has yielded an unprecedented body of information that will help us understand how the brain evolves and adapts to aging and how we can modulate these changes to improve the quality of life of a growing segment of our population.
FURTHER READING
Bishop NA, Lu T, Yankner BA. Neural mechanisms of ageing and cognitive decline. Nature. 2010;464:529.
Blennow K, Zetterberg H. Biomarkers for Alzheimer’s disease: current status and prospects for the future. J Intern Med. 2018;284:643.
Bonafe M, Barbieri M, Marchegiani F, et al. Polymorphic variants of insulin-like growth factor I (IGF-I) receptor and phosphoinositide 3-kinase genes affect IGF-I plasma levels and human longevity: cues for an evolutionarily conserved mechanism of life span control. J Clin Endocrinol Metab. 2003;88:3299.
Cohen E, Paulsson JF, Blinder P, et al. Reduced IGF-1 signaling delays age-associated proteotoxicity in mice. Cell. 2009;139:1157.
Colman RJ, Anderson RM, Johnson SC, et al. Caloric restriction delays onset and mortality in rhesus monkeys. Science. 2009;325:201.
Costantini C, Scrable H, Puglielli L. An aging pathway controls the TrkA to p75NTR receptor switch and amyloid β-peptide generation. EMBO J. 2006;25:1997.
de Cabo R, Carmona-Gutierrez D, Bernier M, Hall MN, Madeo F. The search for antiaging interventions: from elixirs to fasting regimens. Cell. 2014;157:1515.
Dubal DB, Yokoyama JS, Zhu L, et al. Life extension factor Klotho enhances cognition. Cell Rep. 2014;7:1065.
Frake RA, Ricketts T, Menzies FM, et al. Autophagy and neurodegeneration. J Clin Invest. 2015;125:65.
Freude S, Hettich MM, Schumann C, et al. Neuronal IGF-1 resistance reduces Aβ accumulation and protects against premature death in a model of Alzheimer’s disease. FASEB J. 2009;23:3315.
Kapogiannis D, Mattson MP. Disrupted energy metabolism and neuronal circuit dysfunction in cognitive impairment and Alzheimer’s disease. Lancet Neurol. 2011;10:187.
Madeo F, Zimmermann A, Maiuri MC, et al. Essential role for autophagy in life span extension. J Clin Invest. 2015;125:85.
Mattson MP. Energy intake and exercise as determinants of brain health and vulnerability to injury and disease. Cell Metab. 2012;16:706.
Pehar M, O’Riordan KJ, Burns-Cusato M, et al. Altered longevity-assurance activity of p53:p44 in the mouse causes memory loss, neurodegeneration and premature death. Aging Cell. 2010;9:174.
Pehar M, Puglielli L. Molecular and cellular mechanisms linking aging to cognitive decline and Alzheimer’s disease. In: Perloft JW, Wong AH, eds. Cell Aging. New York: Nova Science Publishers Inc; 2012:153.
Peng Y, Shapiro SL, Banduseela VC, et al. Increased transport of acetyl-CoA into the endoplasmic reticulum causes a progeria-like phenotype. Aging Cell. 2018;17:e12820.
Puglielli L. Aging of the brain, neurotrophin signaling, and Alzheimer’s disease: is IGF1-R the common culprit? Neurobiol Aging. 2008;29:795.
Chapter 57
Cognitive Changes in Normal and Pathologic Aging
Bonnie C. Sachs, Brenna Cholerton, Suzanne Craft
“Age does not depend upon years, but upon temperament and health. Some men are born old, and some never grow so.”
Tryon Edwards
“A man is as old as his arteries.”
Thomas Sydenham
The dogma that aging brings inevitable cognitive decline is being challenged by studies of the rapidly expanding oldest segment of our society, adults older than 60 years. Although some aspects of cognition are affected by aging, many changes in cognition previously considered the unavoidable consequence of brain senescence may instead result from incremental insults on brain function associated with aging-related medical conditions. The detection of such changes, which may stabilize or even reverse with appropriate intervention, and their differentiation from the cognitive changes associated with neurodegenerative disease or other neurologic disorders is a critical task. The primary goal of this chapter is to describe changes in various cognitive abilities that occur with normal aging and with common age-related medical and neurologic conditions.
THE EFFECTS OF NORMAL AGING ON COGNITIVE FUNCTION
General Intellectual Functioning
Age-related changes in intelligence are extremely variable, with notable interindividual differences. Overall, studies of aging have consistently shown that crystallized abilities (acquired knowledge and skills gained from experience) remain relatively intact with aging, while fluid intelligence, which involves flexible reasoning and problem-solving approaches, declines. This general pattern has been documented in both cross-sectional and longitudinal research designs. More specifically, it has been theorized that reductions in speed of information processing may account for many of the age-related changes noted in fluid intelligence. Below, we review the literature on the effects of normal aging on specific cognitive functions, summarized in Table 57-1.
TABLE 57-1 ■ COGNITIVE EFFECTS OF NORMAL AGING
Attention
Attention involves the ability to attend to, or focus on, one or more pieces of information long enough to register and make meaningful use of the data.
Attention requires both simple and complex immediate processing and provides a foundation for working memory and other cognitive functions. Sustained attention, or vigilance, entails attending to one type of information over a period of time. After controlling for reaction time and sensory changes, sustained attention and strategies for maintaining vigilance do not appear to change significantly with age. Divided attention, or the ability to concentrate on more than one piece of information at a time, may decline with age, although research in this area has produced mixed results.
Increased distractibility (difficulty blocking out irrelevant stimuli), decreased use of effective strategies, and reduced processing speed may be responsible for some of the noted declines in divided attention. Pronounced
impairment of attention is not typical of normal aging, however, and a complete evaluation of medical and psychosocial issues is warranted for individuals who demonstrate such changes. Attention can be negatively impacted by perceptual or sensory changes, illness, chronic pain, certain medications, and psychological disturbance (in particular, depression and anxiety), all of which are common in an older population. Because the ability to attend effectively is a prerequisite for nearly all other cognitive functions, it is important to identify the cause(s) of attentional impairment whenever possible and to implement any changes in medications or treatment that may help resolve these problems.
Learning Objectives
Describe the expected effects of aging on a variety of cognitive functions.
Identify age-related medical conditions that can negatively impact cognition.
Describe the key clinical features of Alzheimer disease (AD) dementia, as well as current primary tools used for diagnosis.
Describe the differences and similarities between AD dementia and other neurodegenerative diseases.
Key Clinical Points
Normal aging is associated with mild changes in circumscribed aspects of cognition; these changes are much less significant than cognitive effects associated with age-related medical conditions and neurodegenerative disease.
Physical disease is often linked with adverse cognitive consequences and increased risk for neurodegenerative disease; effective management of common age-related conditions may thus reduce the negative impact on cognition that is often considered an unavoidable outcome of aging.
Late-onset depression in older adults may be a symptom of prodromal neurodegenerative disease.
Certain medications can lead to adverse cognitive effects and even increased risk for dementia in older adults.
Alzheimer disease dementia is the eventual clinical manifestation of underlying pathology that is often present for years or decades before symptoms are noticed; thus, careful history-taking and new clinical and research tools may aid in early diagnosis.
Non-Alzheimer neurodegenerative diseases typically manifest with distinct cognitive and behavioral symptoms early in the disease process, although differentiation becomes more difficult with advancing dementia.
Executive Functions
Executive functions include the ability to control, inhibit, and direct behavior; make meaningful inferences and appropriate judgments; plan and carry out tasks; manipulate multiple pieces of information at one time (working memory); complete complex motor sequences; and solve abstract and complex problems. Neuropsychological test performance on executive tasks declines slightly with age, and several current theories posit that deficits in working memory and executive function underlie many age-related changes in cognition. Neurocognitive tasks that require response inhibition, such as the Wisconsin Card Sorting Test, Stroop Color-Word Test, Go/No-Go Task, and Brown–Peterson Distractor Test, may be affected.
Alternatively, many have suggested that a reduction in cognitive processing speed rather than executive function per se may be at least in part responsible for decreased performance on executive tasks. It should be noted that changes in the executive system that occur with normal aging are much less severe than the deficits associated with dysexecutive syndromes, including those caused by stroke, heavy and prolonged alcohol use, head injury, and some neurodegenerative diseases. In fact, successful aging appears to produce little impact on “real-world” executive functions requiring planning and executing multiple tasks. Thus, it is important to assess an individual’s actual functional abilities in addition to performance on neuropsychological tests of executive function.
Memory
Memory changes are perhaps the most common cognitive complaints reported by older adults; reports of occasionally “walking into a room and forgetting why” are almost ubiquitous. Patients often wonder if their subjective concerns reflect normal age-related changes or some pathologic condition. For patients with a family history of Alzheimer disease (AD) or other forms of dementia, even minor memory failings can cause significant anxiety. One of the difficulties in answering such questions lies in the complex nature of the memory process. Different processes are invoked when learning new information (declarative memory), recalling prior life events (remote memory), recalling general knowledge not tied to a specific event (semantic memory), and remembering procedures for performing tasks such as riding a bicycle (procedural memory). In addition, some conditions result in modality-specific deficits, differentially affecting verbal or visual memory.
A number of models describe the different stages or processes involved in forming and recalling memories. One example, the modal model, describes memory processes in terms of sensory memory, short-term (working) memory, and long-term memory. First, when a patient senses and attends to a given stimulus, a large amount of information is briefly held in sensory memory. Information is then rehearsed or manipulated in short-term or working memory. Although many factors are involved in determining what information is transferred to long-term storage, sufficient rehearsal is a common requirement for successful transfer. Thus, it is clear that when a patient complains of memory changes, additional information is required to make sense of the problem.
Although it is true that some older adults continue to demonstrate memory performance comparable to young adults, on average even healthy older adults show changes in some aspects of memory. For example, when a large group of healthy, nondemented older subjects was followed over a 7-year period, a general memory factor showed significant decline with time.
Other studies have attempted to describe which aspects of memory change with healthy aging. In general, older adults without significant illness demonstrate increased difficulty learning new information compared to younger cohorts. When older adults are given repeated chances to practice learning new information, they demonstrate a slower learning curve and a lower total amount learned.
Although healthy older adults may retain slightly less information over time than younger adults, this effect is less pronounced than the slowed learning rate. For delayed memory tests, patients are generally asked to recall information 15 to 30 minutes after the initial presentation of the material. Although patients recall less information at the delay with age, the proportion of the information that they initially learned generally remains stable. In general, longitudinal studies of aging show only small declines in delayed memory with age, particularly on tests of visual memory. Some older adults also appear less likely to use cognitive strategies (eg, clustering information by category) to aid memory recall than younger subjects. This may be due to generational differences in learning style. However, it may be significant because the use of memory strategies reduces the age effect observed on free recall tests.
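The distinction drawn above between slowed learning and preserved retention is often quantified as a "savings" score: delayed recall expressed as a percentage of the best immediate-learning trial. A minimal sketch, with invented example numbers:

```python
def retention_percent(delayed_recall, best_learning_trial):
    """Percent of initially learned material retained after a delay.

    A common "savings" index: delayed recall divided by the best
    immediate-learning trial. The example figures below are invented.
    """
    if best_learning_trial <= 0:
        raise ValueError("best learning trial must be positive")
    return 100.0 * delayed_recall / best_learning_trial

# A younger adult learns 12 items and recalls 11 after 30 minutes;
# an older adult learns only 9 but recalls 8: fewer items overall,
# yet a comparable proportion retained, matching the pattern in the text.
```

Scoring retention as a proportion of what was actually learned, rather than as a raw total, is what separates a slowed learning curve (typical of healthy aging) from accelerated forgetting (more suggestive of pathology).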
A number of memory processes do not appear to change with the typical aging process. Remote memory, or recall of events that occurred in the distant past, remains relatively intact, as does sensory memory. In addition, while older patients often have medical problems that limit physical movement, procedural memory appears to be unaffected by healthy aging.
Lastly, semantic memory, such as vocabulary knowledge and general information about the world, remains largely unchanged by aging until very late in life.
Longitudinal studies have consistently shown that as groups age, the variability in cognitive performance increases. Overall, studies of healthy aging suggest that there are some statistically significant declines in memory in late life. However, the memory functions of patients who age successfully are typically adequate for the demands of independent living.
Language
Language abilities incorporate multiple levels of processing, and general language functions tend to remain relatively stable with increasing age. Some linguistic abilities, however, particularly those involving language output, show reliable declines in older adults. As with other cognitive functions, there are multiple potential intervening factors, including trauma, illness, and sensory disruption, that may lead to more severe changes in the language functions.
Language comprehension involves discerning the simple and complex rules of language and incorporating both visual and auditory information into
a meaningful concept, and is generally associated with few age-related impairments. While hearing loss, which is common in older adults, does not affect language per se, it can impact the ability to successfully perceive spoken language and can mimic difficulties with true language comprehension. Therefore, this should be assessed and ruled out as a contributing factor in older adults complaining of comprehension difficulties.
The ability to recognize basic word structure and word representation is typically measured using “lexical-decision” tasks (in which letter strings are rapidly presented and the person is asked to identify whether or not each is a word) and simple word reading tasks. While some studies have suggested an inverse relationship between performance on these tasks and age, it is generally believed that such changes result from decreased reaction time and processing speed rather than a reduced ability to comprehend word structure and meaning. In addition, there is some indication that the level of lexical processing changes slightly with age, in that older adults tend to rely more on word recognition than do younger adults, while ignoring other factors such as word length. Phonological understanding of language does not appear to change significantly with age, although hearing loss may reduce apparent auditory comprehension. Overall, it is generally accepted that language comprehension remains relatively intact throughout the lifespan.
Basic syntactic abilities also do not appear to change significantly with advancing age, although minor repetitions, longer pauses, and an increased use of pronouns and other vague words while speaking have been noted.
Additionally, a recent longitudinal study suggests a decline in spoken grammatical complexity during the eighth decade of life. The authors note, however, that there is high interindividual variability throughout the lifespan in terms of syntactic aptitude.
Semantic abilities require aptitude with naming and retrieval of long-stored information. There is a steady increase in vocabulary knowledge throughout middle adulthood, and such knowledge typically remains stable in the later years. A frequent complaint from older adults, however, involves the “tip-of-the-tongue” phenomenon, in which there is a notable struggle with spontaneous word finding. In contrast to the dysnomia that often accompanies dementia, however, such changes appear to result primarily from difficulties retrieving rather than storing information, and thus there is usually marked improvement when cues are given. Tasks of verbal fluency, which are most akin to the demands of fluid, conversational speech, also appear to change somewhat with age. Multiple research findings support a decrease in semantic fluency (“name all the animals you can”), while phonemic fluency (“tell me as many words as you can that begin with the letter F”) generally remains stable. Younger adults tend to produce more words and change categories more frequently than do older adults on semantic fluency tasks, while older adults may generate the same number of words but more “clusters” on tasks of phonemic fluency. Thus, older adults likely rely more on structural word knowledge than on word meaning.
Visuospatial Skills
Visuospatial skills are commonly tested by constructional tasks in which patients are asked to draw figures or assemble objects. In general, as patients age, they become slower at completing visuospatial tasks. However, as noted previously, one of the more consistent findings in the field is that normal aging is associated with general slowing of psychomotor and cognitive speed. Therefore, performance on tests of visuospatial functioning is often confounded by generalized slowing. Some studies have attempted to separate the effects of the two domains. For instance, after controlling for processing speed and executive functioning, the effects of age on the commonly used Wechsler Block Design test were dramatically reduced. Similarly, an 11-year follow-up of older adult subjects analyzed both speed and quality of performance (errors) on a parallelogram test. As expected, speed declined with age, but the quality of the performance actually improved significantly.
This body of literature suggests that declining speed contributes to some of the findings that report visuospatial processing deficits in normal aging.
Speed does not appear to account for all of the visuospatial changes observed in healthy aging, however. Mental rotations of objects or spatial coordinates, accurate copy of complex geometric designs, and mental assembly of objects typically worsen with age even when unlimited time is allowed to perform such tasks. Furthermore, when speed is included in scoring, some studies have reported disproportionate slowing on visuospatial tasks compared to verbal tasks. Overall, some studies may exaggerate the visuospatial decline observed in normal aging because of the role speed plays in many tasks used to assess visuospatial function.
However, abstract spatial abilities may decline with age, even when speed is controlled.
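Statistically, "controlling for processing speed" in the studies above usually means regressing test scores on a speed measure and analyzing the residuals, the portion of performance that speed does not explain. A minimal sketch with invented data (the tests named in the comments are only examples of the kind of measures involved):

```python
import numpy as np

def residualize(scores, speed):
    """Remove the linear effect of processing speed from test scores.

    Fits scores = b0 + b1 * speed by ordinary least squares and
    returns the residuals: the part of each score not explained by
    speed. All data below are invented for illustration.
    """
    X = np.column_stack([np.ones_like(speed), speed])
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)
    return scores - X @ coef

speed = np.array([50.0, 45.0, 40.0, 35.0, 30.0])    # e.g., symbol-coding items completed
scores = np.array([48.0, 44.0, 41.0, 36.0, 31.0])   # e.g., constructional task raw score
resid = residualize(scores, speed)                  # residuals center on zero
```

If most of the apparent age effect on the visuospatial score disappears in the residuals, the decline is attributable to slowing rather than to visuospatial ability per se, which is the pattern reported for tests such as Block Design.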
Psychomotor Functions
Age-associated slowing in reaction time, related both to a general reduction in the speed of cognitive processing and to changes in peripheral motor skills, has been consistently reported. Age-related declines in brain dopamine activity and periventricular white matter changes may be associated with reduced cognitive speed and basic motor functions. As a result, performance on tests requiring speed and quick reaction to stimuli is likely to decline. As previously noted, reduced psychomotor speed and slowed reaction time are believed to underlie many of the age-related changes noted on neurocognitive testing, particularly on tasks involving perceptual speed, attention, and working memory. In addition, changes in psychomotor functions can be associated with changes in real-world tasks, such as driving. As a result, it is important to monitor the manner in which physical changes impact an individual’s safety in performing daily activities.
COGNITIVE EFFECTS OF COMMON AGE-RELATED MEDICAL CONDITIONS
In the following section, cognitive symptoms associated with common diseases affecting older adults are reviewed. The prevalence of common medical conditions that may impact cognition in older adults is presented in Figure 57-1. A summary of cognitive symptoms associated with common medical conditions is presented in Table 57-2.
TABLE 57-2 ■ POSSIBLE COGNITIVE EFFECTS OF COMMON AGE-RELATED MEDICAL CONDITIONS
FIGURE 57-1. Prevalence of common medical conditions associated with cognitive impairment among US adults aged 65 and older. Data for most diseases obtained from cdc.gov. (Data from Diab N, Daya NR, Juraschek SP, et al. Prevalence and risk factors of thyroid dysfunction in older adults in the community. Sci Rep. 2019;9[1]:13156.)
Cardiovascular Disease
Cardiovascular disease increases risk for developing both debilitating cognitive decline due to large vessel stroke and/or vascular dementia (VaD) and milder cognitive deficits. While much of the cognitive dysfunction associated with cardiovascular disease can be discussed in terms of its effects on the cerebrovasculature and resulting vascular cognitive impairment (VCI), here we focus on independent cardiovascular disease–associated risk factors for cognitive decline in older adults, including the effects of coronary artery disease, cardiac surgery, and hypertension.
Coronary artery disease Subtle impairments in cognition can be seen early in the disease, including among patients diagnosed at midlife. Specific impairments have been reported across measures of reasoning and other executive functions, verbal memory, vocabulary, semantic and phonemic fluency, visuospatial function, and global cognition. Longer duration and greater severity of coronary artery disease are consistently associated with reduced cognitive performance, even in the absence of stroke or obvious cerebrovascular disease. Indeed, among older adults without a history of stroke enrolled in the Cardiovascular Health Study, the prevalence of cognitive impairment was greater than 60%. Factors associated with cognitive impairment among patients with coronary artery disease include cardiac surgery, presence of an apolipoprotein E (APOE) ε4 allele, presence and degree of heart failure, use of concurrent anticholinergic medications, hormone disruptions (eg, thyroid), and elevated inflammatory biomarkers (eg, interleukin-6, C-reactive protein, brain-derived neurotrophic factor).
Cardiac surgery In addition to the risk of immediate postoperative delirium, postoperative cognitive decline (POCD) after cardiac surgery is reported in 30% to 70% of patients at the time of hospital discharge. Factors that increase the risk of POCD include older age, cerebrovascular disease, underlying neurodegenerative disease, cardiopulmonary bypass time, manipulation of the ascending aorta, and cerebral hyperthermia. A wide range of cognitive deficits, including problems with attention and concentration, processing speed, memory, and visuospatial function, have been noted in patients immediately following surgery. Several reports suggest that postoperative cognitive function stabilizes or even improves after a period of approximately 12 months in those patients who demonstrate initial decline. However, the prevalence of cognitive dysfunction remains relatively high at 12-month follow-up (15%–25%), and longitudinal data suggest that these patients are at higher risk for continued cognitive deterioration and multiple dementia types (including AD, VaD, and mixed dementias). Neurocognitive status after coronary artery bypass grafting (CABG) is related to overall quality of life, and current recommendations underscore the importance of closely monitoring cognitive status in the years following cardiac surgery.
Hypertension Essential hypertension is associated with risk of cognitive impairment independent of secondary disease or organ damage. Uncontrolled hypertension may impact cognition and increase dementia risk via several
mechanisms, including arterial stiffening, vascular inflammation, disruption of the blood–brain barrier, development of subcortical white matter lesions, adverse effects on cerebral blood flow, and disrupted energy substrate delivery. Potential cognitive effects of primary hypertension include reductions in mental status, slowed reaction time, reduced attention and vigilance, weakened executive function, poor verbal fluency, and impaired visual organization and construction. Memory functions, including spatial recall, verbal recall, and word recognition, may also be affected in some hypertensive patients. Beyond effects on specific cognitive functions, hypertension, particularly at midlife, is a significant risk factor for later dementia and has accordingly been identified as one of the major modifiable risk factors for dementia. Treating hypertension with medication and/or lifestyle changes may reduce dementia risk, although evidence regarding the use of specific antihypertensives to prevent later cognitive impairment is mixed. Interestingly, blood pressure often declines in the period immediately preceding the onset of AD, and it has been suggested that low blood pressure in older adults may compromise brain function as a result of hypoperfusion. In addition, sudden aggressive lowering of blood pressure in older adults with long-term hypertension may interact with their chronically upregulated cerebral vascular resistance and induce cerebral hypoperfusion. Careful monitoring of blood pressure and gradual titration of medication regimens are therefore of particular importance. There has also been interest in whether particular treatment methods confer specific protective effects on cognition; additional randomized controlled trials are needed to answer this question.
Type 2 Diabetes Mellitus
Much attention has been given to the rampant epidemic of type 2 diabetes mellitus (T2DM) in older adults, a trend thought to be largely attributable to obesity and physical inactivity. Current prevalence estimates in the United States suggest that between 20% and 30% of adults older than 65 years are afflicted with T2DM. The negative impact of T2DM on multiple medical systems is well known. Its clear impact on cognitive function in older adults is less widely known, but accruing evidence demonstrates that these patients show pronounced impairment in attention and verbal memory when compared to healthy age-matched adults, as well as accelerated cognitive decline over time. Complex attentional impairment is most common,
involving the inability to handle multiple streams of information or attend to information in the face of competing stimuli. Verbal memory deficits typically affect the ability to encode new verbal information. The magnitude of such impairment may vary from subtle subjective complaints to pronounced impairment that interferes with daily activities and with the patient’s ability to adhere to complex treatment regimens. The mechanisms causing attentional and memory impairments are likely multifactorial and include vascular factors, as described earlier, in addition to the potential negative effects of hyperglycemia and glucose toxicity, which are thought to cause oxidative injury. Notably, successful treatment of T2DM has been shown to improve cognitive function. Recent work also suggests that insulin resistance, independent of hyperglycemia, may have negative consequences for brain systems mediating memory and attention, suggesting that therapeutic strategies focused on improving insulin sensitivity may be preferable to those focused on augmenting insulin levels. The importance of treating and, preferably, preventing T2DM has been underscored by its associated risk for various forms of neurodegenerative disease, including AD, VaD, and Parkinson disease (PD).
Chronic Kidney Disease
Cognitive impairment among patients with chronic kidney disease (CKD) is quite common, with prevalence estimates ranging from 10% to 40% depending upon the length and severity of disease. CKD patients perform more poorly than controls on tests of orientation, attention, abstract reasoning and concept formation, executive function, memory, language, and global cognition. Such changes can occur early in the disease but progress at different rates depending on the domain assessed. For example, language may continue to decline while other domains remain stable for long periods of time. The underlying cause of cognitive impairment in CKD is currently not entirely understood. Cerebrovascular disease is a common comorbidity of CKD and thus likely underlies some of the cognitive impairments associated with the disease. However, as cognition often improves or stabilizes following dialysis and/or renal transplantation, underlying vascular disease is most likely not the sole culprit. Rather, it has been suggested that dialysis-related factors (hemodynamic instability, cerebral microbleeds), uremic metabolites, anemia, and depression may all potentially impact cognitive function among those with CKD.
Chronic Obstructive Pulmonary Disease
Emphysema and chronic bronchitis obstruct airflow, resulting in hypoxemia and hypercapnia. Cognitive dysfunction is commonly observed in chronic obstructive pulmonary disease (COPD), although the affected skills tend to be broad and diffuse. Deficits in verbal and visual memory, attention, abstraction, psychomotor speed, information processing speed, and global cognition have all been reported. These changes in cognition appear to be due to hypoxemia. The decrease in arterial oxygen partial pressure correlates with neuropsychological impairments, and most studies indicate that oxygen therapy results in modest improvements in cognition. However, a diagnosis of COPD, especially at midlife, does increase risk for later life mild cognitive impairment (MCI) and dementia.
Obstructive Sleep Apnea
The prevalence of obstructive sleep apnea (OSA) increases in geriatric populations. Cognitive changes in OSA can be diverse but may include reduced performance on measures of global cognition, attention, concentration, processing speed, working memory, executive functioning, and verbal and visual learning and memory. Interestingly, the association between cognitive impairment and OSA is stronger in younger and middle-aged adults. In older adults, OSA may have weaker or no association with specific cognitive impairments; however, there is increased risk over time for MCI, AD, and VaD. Cognitive deficits may be related to severity of hypoxia, hypersomnolence, or comorbid conditions. In addition, recent evidence suggests a possible association between OSA and presence of AD biomarkers. Continuous positive airway pressure treatment may improve aspects of cognitive functioning in some patients.
Nutritional Deficiency
Older adults are at considerable risk for nutritional deficiencies as a consequence of poor diet and malabsorption syndromes. Much research in this area has focused on deficiencies of B vitamins, antioxidants, and vitamin D.
Lower levels of vitamin B12 and folate are associated with reduced cognition and increased risk of AD, although the evidence is mixed. B vitamins play an important role in homocysteine metabolism. Homocysteine is an independent
risk factor for cerebrovascular and cardiovascular disease. In patients with
both AD and VaD, elevated plasma homocysteine levels have been reported, and recent studies suggest that homocysteine levels are related to cognitive function in normal aging. Reduced performance on tests of mental status, nonverbal pattern abstraction, construction, and processing speed is reported in patients with high plasma homocysteine. Given that plasma homocysteine and folate levels are inversely related, it is conceivable that such cognitive deficits are related to reduced folate rather than to increased homocysteine per se. Recent data suggest, however, that homocysteine increases the risk for cognitive decline independent of both folate levels and other vascular risk factors. In addition, although vitamin B supplementation has been shown to reduce homocysteine levels in older adults, concurrent benefits on cognition have not been widely reported. Nonetheless, among older adults with cognitive deficits, routine laboratory testing and repletion of B vitamins are recommended when applicable.
There has been much debate about the potential protective effects of antioxidants in dementia. For example, both deleterious and protective effects of vitamin E have been reported. Overall there is little evidence that vitamin E supplementation reduces risk for cognitive decline. A large-scale clinical trial, however, showed reduced functional decline with high-dose vitamin E supplementation (2000 IU per day) for mild-to-moderate AD. Although these results appear promising, additional work is needed to clarify the risk-to-benefit ratio for vitamin E supplementation.
More recently, concerns about vitamin D deficiency have been raised in relation to increased risk for a number of negative health outcomes, including cognitive decline and dementia. Meta-analyses indicate associations between vitamin D deficiency and worse performance on tests measuring global cognition, visuospatial abilities, processing speed, and attention, while memory skills appear to be less affected. Unfortunately, vitamin D supplementation has not been strongly associated with improved cognition in older adults to date.
Thyroid Disease
Both hypo- and hyperthyroidism are associated with potential adverse cognitive changes, particularly in younger and middle-aged adults.
Hypothyroidism may increase risk for dementia, as well as result in specific deficits in visuospatial skills, psychomotor speed, and memory. There is some evidence that the memory deficits associated with hypothyroid
conditions are related to retrieval rather than immediate recall or learning; thus, disproportionately better performance may be observed on tests of cued or recognition memory than on free recall tests. Thyroid replacement therapy frequently improves cognitive functioning, although skills may not return to baseline levels in some patients. Hyperthyroidism may also increase dementia risk, and related cognitive symptoms may include deficits in attention, executive functioning, and memory. Interestingly, the dementia risk associated with hyper- and hypothyroid conditions does not appear to extend to adults in the older age ranges (eg, 75 and older). There is also some evidence for sex differences, with a stronger relationship between thyroid disease and dementia risk reported in women than in men. Overall, however, thyroid hormone abnormalities occur in a large proportion of patients with dementia and thus should be routinely monitored as a treatable contributor to cognitive decline.
Depression
Depression is an increasingly common problem in older adults, with prevalence estimates ranging from 11% to 30%. Situational risk factors for depression in older age include the loss of social support, death of family members and close friends, changing social roles, illness, and physical limitations. Depressive symptoms, including lack of initiation, impaired executive function, cognitive slowing, poor attention and concentration, and mild memory impairment, can mimic early signs of dementia and may potentially lead to misdiagnosis and/or lack of appropriate medical intervention. As depression is a potentially reversible cause of cognitive impairment, the differential diagnosis between depression and dementia is vital when evaluating older patients. Factors that are useful in discriminating between depression and dementia include the clinical course of symptoms, relationship to a specific crisis or stressful event, history of previous psychiatric problems, quality of effort, and level of impaired processing on cognitive evaluation.
Late-onset depression is also an independent risk factor for the development of neurodegenerative diseases, including AD, VaD, and Lewy body disease (LBD). Depressive symptoms may thus represent a preclinical phase of progressive dementia in some patients. In patients with early cognitive loss, depressive symptoms may represent realistic self-evaluation of such decline and/or may coincide with actual changes in
neuroendocrinologic status. In the absence of predisposing situational factors, late-onset affective disorders are quite rare. Thus, cognitive ability and independent functional status must be carefully evaluated in older patients, and an in-depth qualitative analysis of depressive symptoms may be useful. For example, major depressive disorder is associated with a range of affective, cognitive, and vegetative symptoms, while depressive symptoms in early dementia are more likely to include cognitive and motivational symptoms (eg, poor concentration, lack of initiation) in the relative absence of central affective disturbance. Continued monitoring of the cognitive status of depressed older patients is essential in order to rule out progressive cognitive decline.
Medications
A number of medications have the potential to cause both subtle cognitive changes and alterations in overall mental status, particularly in older adults. Medications known to adversely affect cognitive status include opiates and opioid-like analgesics, benzodiazepines, anticonvulsants, antipsychotic and antidepressant medications, antiparkinsonian agents, central nervous system stimulants, antihistamines and decongestants, and certain cardiovascular medications. Anticholinergic medications, used to treat a variety of conditions including insomnia, bladder spasms, gastrointestinal disorders, and dizziness, have been linked to an increased risk for dementia in older adults in a large-scale prospective community-based study. Despite this finding, these drugs are widely prescribed in older adult populations. A variety of other medications, particularly those that readily cross the blood–brain barrier, may impact cognitive function in certain individuals. In addition, drug interactions in older adults can lead to more serious physiologic and cognitive consequences than those observed in younger adults, and adverse medication interactions are more likely to occur in an older population. As a result, changes in cognitive status must be carefully evaluated in light of a patient’s current medication profile.
Delirium
Older adults are at a significantly increased risk for developing delirium, particularly following surgery or in response to medication changes or interactions. Delirium is a reversible condition that can be distinguished from most neurodegenerative diseases by a rapid onset of symptoms that
include significant disorientation and disturbance in consciousness, reduced awareness of the environment, and attention deficits. While recent memory is generally impaired, altered consciousness is the primary indicator of delirium. Hallucinations, delusional thinking, and other disturbed thought processes may also be present. Delirium usually resolves quickly, but may persist for several weeks. It should also be noted that postoperative delirium may signify the presence of a beginning dementia. A primary concern is to identify and treat the underlying cause while providing a supportive and nonthreatening environment for the patient.
NEURODEGENERATIVE DISEASE
The following section reviews the cognitive and behavioral profiles associated with common neurodegenerative disorders (Table 57-3). Early identification of such disorders, many of which are diagnosed solely on these parameters, has become increasingly important due to potential therapies that may delay disease progression.
TABLE 57-3 ■ EARLY COGNITIVE SYMPTOMS ASSOCIATED WITH DIFFERENT DEMENTIA TYPES
ALZHEIMER DISEASE (AD)
Onset: insidious. Course: steady, gradual. Memory: significantly impaired. Attention/executive function: intact primary span; impaired selective/divided attention, working memory, response inhibition, and general problem solving. Language: semantic organizational abilities significantly impaired; mild anomia. Visuospatial: simple copy intact; complex visual reasoning impaired. Motor: if present, mild. Mood/behavior: depression symptoms are common.

LEWY BODY DISEASES

PD/PDD
Onset: insidious. Course: varied. Memory: possible declarative/verbal memory deficits; may be due to impaired retrieval. Attention/executive function: intact primary attention span; impaired selective/divided attention; fluctuations; difficulty with planning. Language: verbal fluency and mechanical aspects of speech impaired. Visuospatial: variable; may be impaired. Motor: resting tremor, bradykinesia, rigidity, postural instability, shuffling gait. Mood/behavior: depression common; may exhibit hallucinations/delusions, less commonly than in DLB.

DLB
Onset: insidious. Course: steady, usually gradual. Memory: impaired, though less so than in AD. Attention/executive function: significant fluctuations in attention. Visuospatial: impaired construction, copy, and visuospatial planning and problem solving; slowed processing speed. Motor: may exhibit a range of parkinsonian symptoms. Mood/behavior: hallucinations, delusions, and depression may be present.

VASCULAR DEMENTIA (VaD)
Onset: may be insidious or acute. Course: stepwise or gradual. Memory: possible deficits; varies. Attention/executive function: intact primary span; selective/divided attention significantly more impaired relative to performance on verbal memory tasks. Language: verbal fluency impaired. Visuospatial: relatively preserved, although may be affected by executive impairments. Motor: slowing; possible discrete motor problems depending on distribution of vascular changes. Mood/behavior: depression common.

FRONTOTEMPORAL LOBAR DEGENERATION (FTLD)

Behavioral variant FTLD (FTLDbv)
Onset: insidious. Course: steady, rapid. Memory: relatively preserved. Attention/executive function: intact primary memory span; impaired selective/divided attention; impaired across a range of executive functions. Language: verbal fluency impaired. Visuospatial: relatively preserved, although may be affected by executive impairments. Motor: may exhibit ideational apraxia. Mood/behavior: significant behavioral changes, which may include behavioral disinhibition, apathy, loss of empathy/sympathy, and hyperorality.

Primary progressive aphasia (PPA)
Onset: insidious. Course: steady. Memory: varied; generally preserved, though patients may score poorly on verbal memory tests. Attention/executive function: preserved. Language: nonfluent variant, poor articulation, dysarthria, relatively preserved comprehension; logopenic variant, anomia, impaired repetition; semantic variant, impaired comprehension, anomia, intact speech rate and prosody. Visuospatial: preserved; in semantic dementia, visual agnosia may be present. Motor: nonfluent PPA may show buccofacial apraxia. Mood/behavior: changes unlikely.
Dementia Due to Alzheimer Disease
AD dementia, the most prevalent of the primary neurodegenerative disorders, is the clinical manifestation of underlying AD neuropathology, primarily the accumulation of β-amyloid protein and neurofibrillary tangles leading to neuronal loss and synaptic degeneration. Diagnostic criteria from the National Institute on Aging and the Alzheimer’s Association (NIA/AA), initially put
forth in 2011 and updated in 2018, recognize AD as a continuum, with underlying neuropathologic processes often beginning 20 years or more before the onset of clinical dementia symptoms. These criteria allow the use of biomarkers that reflect these processes, alongside supporting clinical information, in early differential diagnosis. Currently, cerebrospinal fluid (CSF) β-amyloid and phosphorylated tau and imaging studies (amyloid or tau positron emission tomography [PET]) may be used to determine the presence of β-amyloid deposition and/or aggregated tau to provide diagnostic evidence for or against AD as the underlying pathology, and anatomic magnetic resonance imaging (MRI), fluorodeoxyglucose (FDG)-PET, and CSF total tau may be incorporated as markers of severity of neurodegeneration. Several blood-based biomarkers currently in development may serve as useful diagnostic tools in the future; the most promising of these is plasma phosphorylated tau181, which is closely associated with CSF phosphorylated tau and tau PET and may be useful both in staging AD and in differentiating AD from non-AD dementia types. Where such tools are not yet available, however, conventional diagnostic criteria are considered reasonably accurate, particularly when the evaluation is comprehensive and includes a complete medical and psychosocial history, medical evaluation, and neurocognitive testing.
The NIA/AA criteria for AD dementia require (1) that the patient meets criteria for dementia (cognitive symptoms that interfere with independently completing daily activities [managing medications or finances, driving, cooking, etc] that occur within at least two cognitive domains, represent a decline from previous function, and are not better explained by delirium or psychiatric disturbance), (2) evidence/report of an insidious onset, and (3) worsening cognition by report or observation of the patient, informant, or clinician. The initial cognitive symptoms may be amnestic or nonamnestic to account for variants that present with deficits in cognitive domains other than memory, such as posterior cortical atrophy (which presents with initial visuospatial deficits) or language-predominant forms, though amnestic presentations are by far the most common. The term “probable AD” should not be used if another medical/neurologic disease could account for the symptoms. “Possible” AD dementia may be diagnosed if the symptom history is atypical or unclear or if there is a mixed dementia picture.
Patient history A complete medical and psychosocial history is a vital component of any dementia assessment. Such a history should be obtained both from the patient and from a reliable informant, preferably someone who has regular contact with the patient and adequate opportunity to observe their daily functional abilities. Typically, a patient in the earliest stages of AD will not exhibit deficits in basic self-care activities, such as feeding or bathing. However, more complex daily tasks, including driving, managing finances, shopping, and other chores and activities, are likely to be affected. A gradually progressive course and insidious onset of cognitive symptoms are hallmarks of AD, and thus a careful history regarding the nature and timing of symptom onset and progression must be obtained. An inventory of all current medical and psychiatric concerns, family history, past major medical and mental health problems, and medications must be evaluated in order to rule out conditions that may be either causing cognitive problems or exacerbating their expression.
Medical examination An important goal of the medical evaluation is to exclude the presence of medical conditions that may be responsible for the observed cognitive deficits. It is critical to investigate potentially reversible causes of dementia such as uncontrolled liver or kidney disease, adverse reactions to medications, and delirium. Laboratory blood tests, such as complete blood count, complete metabolic panel, and B12 levels, can aid in ruling out
systemic illnesses, vitamin deficiency, or organ malfunction. Structural brain scans, such as CT or MRI, are used to evaluate major cerebrovascular events, tumors, normal pressure hydrocephalus (NPH), and other neurologic conditions, while functional brain scans (eg, PET) can identify patterns of activity that may be useful in classifying dementia type. As noted, ruling out the possible contribution of psychiatric conditions, such as major depression, is also a critical piece of this examination.
Neuropsychological assessment Neuropsychological assessment of cognitive function not only provides confirmatory evidence for cognitive impairment but may also aid in disease staging and clarification of dementia type. Given the extensive variation in rate of AD progression between patients, successive neuropsychological examinations early in the disease can also provide information regarding an individual’s rate of progression and remaining cognitive strengths, which is essential for advising patients and family about possible safety issues that arise. Perhaps one of the most
valuable assets of neuropsychological evaluation is the sensitivity of the tests to detect early cognitive decline. While there has been debate regarding the usefulness of providing an early AD diagnosis, it is generally accepted that such a diagnosis will allow the patient to avail themselves of current and emerging therapies, as well as to make decisions regarding health care, finances, and legal issues while they still have the capacity to do so. In a typical neuropsychological evaluation, patients are given tests that sample a variety of domains of cognitive function. Results are compared to normative data based on age, and often education, and also to an individual’s estimated premorbid abilities (based on educational and occupational background and performance on tests that tend to remain stable over time). Pattern analysis of test results, in combination with data from the patient history and medical evaluation, is then used to generate diagnostic possibilities. The following sections discuss cognitive impairment patterns typical in patients with AD.
Memory The hallmark of AD, and most often the first cognitive symptom of the disorder, is anterograde amnesia, evidenced by difficulty with learning and retaining new information, which is often described as “rapid forgetting.” Deficits are noted in declarative memory as a result of prominent impairment in information encoding, retrieval, and in particular, storage of new material. Patients are likely to exhibit deficits in recent episodic recall, and they or their caregivers often report they misplace items, forget recent events or conversations, and frequently repeat questions or statements. In contrast to impaired episodic recall and difficulty learning new information, procedural memory is rarely impaired, and remote memory remains relatively intact until later stages of the disease.
In order to adequately evaluate short-term memory loss and establish a pattern of impaired learning and memory storage or retention, neuropsychological evaluations include tests of both immediate and delayed verbal and visual recall. While cognitive screeners such as the Folstein Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) assess general mental status and orientation, they are not adequate for a comprehensive understanding of memory impairment. To satisfactorily assess a person’s true memory ability, tasks that are high in cognitive demand, that exceed the primary memory span (such as story recall and list learning tasks that include at least 10 items), and that have a recognition component are recommended. On neuropsychological examination, verbal and visual free immediate and delayed recall and
recognition are significantly impaired in patients with AD relative to same-age peers, and patients typically exhibit a high number of intrusion errors and repetitions. Semantic recall is generally the most prominently impaired, and thus verbal recall tasks are often the most sensitive to early memory loss.
Attention and Executive Function Certain aspects of attention and concentration are often impaired early in the disease process, and recent research suggests that attention may in fact be among the earliest abilities affected in AD. While patients with AD are likely to have relatively intact simple attention (eg, primary memory span), tests requiring selective, divided, and complex aspects of attention are likely to be impaired, particularly as task demands increase. This pattern of performance strongly supports the presence of a deficit in working memory (the ability to simultaneously attend to, process, and respond to multiple pieces of information) early in the disease process, and a converging body of evidence supports a primary executive component in AD.
In addition to problems on tasks of complex attention and working memory, patients are likely to exhibit mild deficits in response inhibition, evidenced by intrusion errors and perseverative responses, on neuropsychological testing. Deficits in abstract reasoning, general problem- solving ability, and making appropriate judgments are also commonly noted. In assessing for judgment and abstraction, patients may be asked the meaning of proverbs (eg, “you can lead a horse to water but you can’t make it drink”) and are often given hypothetical situations in which they must decide an appropriate course of action (eg, “What would you do if you were in a crowded shopping mall and saw smoke and fire?”). On these tasks, even early AD patients may provide incorrect, concrete, or inappropriate responses.
Language Semantic processing is considered the primary language deficit in AD and is present in more than half of all AD patients at the time of diagnosis. Word finding problems are commonly reported in both AD and normal aging, but patients with AD have more severe deficits and are more likely to produce a significantly higher number of semantic paraphasias and circumlocutions. Patients are generally impaired on tasks of confrontational naming (eg, naming a visually presented item), and, unlike changes that occur with normal aging, are not likely to be assisted by phonemic cues (eg, providing the sound the word starts with). Syntactic processing, in contrast to semantic processing, is typically unaffected in mild AD. For example, patients with early AD can typically process even complex sentences at the
same level as their healthy counterparts, and generally remain unimpaired on repetition and fluent speech. Verbal fluency tasks are particularly useful in evaluating both semantic and syntactic processing. Semantic, or category, fluency (eg, “Tell me as many animals as you can”) is generally impaired disproportionately to syntactic, or phonemic, fluency (eg, “Tell me as many words as you can that begin with a given letter”) in AD. While decreased information processing and working memory may impair an AD patient’s responses on certain syntactic processing tasks, and comprehension and intelligible speech are likely to decline slowly as the disease advances, patients who present first with nonfluent aphasia or impaired comprehension should be carefully evaluated for conditions other than AD.
Visuospatial Function Deficits in visuospatial abilities are frequently seen in patients with AD, although they generally appear later than memory deficits. Patients may become lost while driving, or even in familiar places (eg, grocery store). Eventually, disorientation may lead to confusion in one’s own home and subsequent wandering behavior. Early deficits, however, are more likely to involve visuospatial problem solving. Neuropsychological tests commonly used involve comparing simple construction or copying, typically not impaired early in the disease, to complex visual reasoning. The presence of constructional apraxia early in the disease may indicate greater pathology in visual processing areas of the brain and has been associated with more rapid symptom progression. In general, more complex drawing tasks, such as three-dimensional figure copy, may be more precise measures of the most common early visuospatial deficits in AD than simple copying tasks.
Motor Function While motor dysfunction has not typically been considered a defining symptom of AD, recent research suggests that AD patients may display impairments in gait, motor speed, and general level of activity, and these changes may even be evident during the prodromal phase of the disease. In addition, converging evidence provides support for greater neuropathologic overlap between AD and other conditions, such as dementia with Lewy bodies (DLB), making motor changes such as tremor or gait disturbance more likely in these patients. Importantly, there have been reports of gait disturbance in patients taking cholinesterase inhibitors, a factor that should be carefully monitored when prescribing these medications.
Patients with AD may also exhibit mild ideomotor and ideational apraxia (deficits in skilled movements) due to concrete responses, lack of sufficient external cues, or a disruption in conceptualization ability. However,
moderate-to-severe apraxia is not generally present until later stages of the disease. Incorporating an apraxia assessment into a dementia evaluation is useful, particularly for excluding other disorders that may initially present with more severe skilled movement disorders.
Behavioral changes Mood disturbance is common in patients with AD and has more recently been recognized as a possible prodrome to the development of dementia in older adults. Depression in AD may manifest as apathy, indifference, poor initiation, or emotional lability. Irritability, agitation, and paranoid ideation can also occur in AD, and may worsen with disease progression, prompting wandering behavior and aggressive outbursts. Repetitive and aimless behavior may also increase as the disease progresses. More severe psychotic symptoms, such as hallucinations and severe delusions, are typically rare in earlier stages of AD in the absence of coexisting disorders. Given the wide range of behaviors that may be exhibited in AD, it is imperative that the patient’s family be provided with ample dementia education, that they have access to social support, and that they possess the coping skills necessary to provide adequate care.
Awareness of deficits Patients in the early stages of AD typically vary in their level of deficit awareness. Whatever the starting point, deficit awareness declines with disease progression. Even when patients do acknowledge their cognitive decline, such awareness may not be “complete” as a result of deficits in executive function. For example, patients may be unable to translate cognitive problems into everyday functional difficulties, and as a result may not understand how their deficits affect certain activities, such as driving, cooking, and managing finances. Again, providing the patient’s caregivers with ample education can increase the likelihood that patients and family will comply with physician recommendations to limit certain unsafe behaviors.
Variant AD presentations While most often the primary early deficit in AD involves recent memory, there are several, less common variant forms of AD that involve primary deficits in executive functioning, language (logopenic aphasia [LPA]) or visuospatial function (posterior cortical atrophy). In such cases, differential diagnoses, including frontotemporal lobar dementia (FTLD), primary progressive aphasia (PPA), Lewy body disease (LBD), and others, must be carefully considered prior to assigning a diagnosis of AD.
Mild Cognitive Impairment Due to AD
“Mild cognitive impairment” describes age-atypical levels of cognitive impairment that do not meet the criteria for dementia and that are not the result of a known medical condition or neurodevelopmental disorder. MCI and AD are not fully distinct entities and can be considered as existing on a cognitive and functional continuum. In its most common usage, the term MCI is assigned to cognitive impairments thought to be related to underlying AD pathology, although criteria for prodromes of other neurodegenerative dementias (eg, Parkinson disease dementia [PDD], FTLD) have been proposed. The clinical conceptualization of MCI relies largely on characteristic cognitive and functional symptoms. Current NIA/AA criteria for MCI due to AD include the following: (1) concerns regarding a change in cognition by either the patient, informant, or a skilled clinician, (2) objective impairment in one or more cognitive domains compared to the patient’s estimated premorbid level of functioning based on age and education, (3) preserved independence in functional abilities, although patients may require some assistance or have mild impairments on more complex tasks such as financial management, navigation to unfamiliar places, etc, and (4) absence of dementia. Other causes, such as traumatic brain injury, cerebrovascular disease, and medical or metabolic abnormalities, should be ruled out as the primary etiology, though they may still play a smaller, contributing role.
While cognitive screeners can detect certain levels of cognitive impairment, they are not all equally sensitive to MCI, especially in persons with higher levels of educational attainment. Further, they may incorrectly identify cognitive impairment in individuals from diverse racial and ethnic backgrounds and those with lower levels of education. In contrast, a comprehensive neuropsychological evaluation is particularly useful in distinguishing normal age-related changes in cognition from MCI, as this type of testing can help ascertain subtle decrements in performance compared to an individual’s estimated baseline level of functioning. Serial neuropsychological assessments are also helpful for monitoring subsequent change in cognition. On these types of assessments, scores that fall 1 to 1.5 standard deviations below the mean for age- and education-matched peers are typically considered “abnormal,” though no formal cutoff thresholds exist. As is the pattern with AD, MCI typically, though not always, presents with early episodic memory impairment on cognitive testing.
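The standard-deviation convention described above amounts to a simple z-score computation. The following sketch is purely illustrative: the function names, normative values, and the choice of a -1.5 SD cutoff are hypothetical, since, as noted, no formal threshold exists.

```python
# Illustrative sketch (not from the chapter): flagging a neuropsychological
# test score that falls below a normative cutoff for age- and
# education-matched peers. Names and example values are hypothetical.

def z_score(raw_score, norm_mean, norm_sd):
    """Standardize a raw test score against normative data."""
    return (raw_score - norm_mean) / norm_sd

def flag_performance(z, cutoff=-1.5):
    """Label a z-score using a common (but informal) cutoff."""
    return "abnormal" if z <= cutoff else "within normal limits"

# Example: a delayed recall raw score of 6 against a hypothetical
# normative mean of 10 (SD 2) for matched peers.
z = z_score(6, 10, 2)          # z = -2.0
label = flag_performance(z)    # "abnormal" at the -1.5 SD cutoff
```

In practice the normative mean and SD come from published test norms stratified by age and education, and clinicians interpret scores in the context of estimated premorbid functioning rather than applying a single cutoff mechanically.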
Longitudinal research suggests that 80% of those diagnosed with MCI will go on to develop AD within 5 to 8 years, converting at a rate of approximately 10% to 15% per year, compared to general population conversion rates of 1% to 2%. Amnestic forms of MCI are more likely to progress to AD than nonamnestic (eg, executive or language) forms.
However, the current criteria acknowledge that even initial nonamnestic presentations may also progress to AD. Progression to AD is also thought to be more likely when multiple cognitive domains are affected (amnestic multi-domain MCI). In addition to converting to AD, both single-domain and multi-domain MCI may further progress to other forms of dementia such as vascular or Lewy body dementia. Because of the heterogeneity inherent to the construct of MCI, as many as 15% to 40% of those diagnosed with MCI may subsequently perform in the normal range on neuropsychological assessments or “revert” to normal. The exact reasons that individuals with MCI may subsequently revert are multifactorial and incompletely understood. Studies have found that those from community- or population-based versus clinic-based samples are more likely to revert, as are those without positive biomarkers, those who are younger and have fewer medical conditions, and those with higher levels of education (Figure 57-2).
FIGURE 57-2. A conceptual model of mild cognitive impairment (MCI) as prodromal dementia. A minority of persons diagnosed with MCI may remain stable or even improve over time. Although individuals with MCI may decline to vascular or other forms of dementia, the majority of declining MCI patients evaluated in research clinics receive a diagnosis of AD
(either in pure form or mixed with other dementia subtypes). (Reproduced with permission from Golomb J, Kluger A, Garrard P, et al. Clinician’s Manual on Mild Cognitive Impairment.
London, UK: Science Press; 2001.)
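The annual and multi-year conversion figures cited above can be roughly reconciled by compounding a constant annual rate. This is an illustrative simplification only, not a model from the chapter (real conversion hazards are not constant over time):

```python
# Illustrative arithmetic: cumulative probability of conversion to AD,
# assuming (hypothetically) a constant annual conversion rate.

def cumulative_conversion(annual_rate, years):
    """P(conversion within `years`) if each year carries the same risk."""
    return 1 - (1 - annual_rate) ** years

# At 15% per year, about 73% of an MCI cohort would convert within 8 years,
# broadly in line with the 80% figure cited for a 5- to 8-year window.
mci_risk = cumulative_conversion(0.15, 8)         # ≈ 0.73
# At a general-population rate of 2% per year, only about 15% convert.
population_risk = cumulative_conversion(0.02, 8)  # ≈ 0.15
```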
Biomarkers, while not yet commonplace in the clinical diagnosis of MCI, are playing an increasingly important role in understanding the pathophysiology of MCI and AD. Further, they can provide some support when deciding whether a particular clinical phenotype may be due to AD and in estimating likelihood of progression to dementia. For instance, one set of proposed criteria grades the likelihood of progression to AD on the basis of biomarker outcomes: MCI due to AD-high likelihood is assigned when there are positive biomarkers for both β-amyloid accumulation and neuronal injury on the basis of imaging and/or lumbar puncture, while the absence of positive biomarkers indicates a process that is unlikely to be related to AD. A more recent set of biomarker-based research criteria for MCI and AD incorporates measures of amyloid, tau, and neurodegeneration. However, these criteria are not yet recommended, nor widely used, in clinical diagnosis. In the future, these biomarkers, as well as less invasive and less costly blood-based biomarkers currently in development (eg, plasma phosphorylated tau181), may be widely used in both clinical and research settings to differentiate between MCI due to AD and other dementia types.
Palliative care concerns specific to AD Patients with AD generally have a gradual progression of cognitive and functional impairment, though the absolute course and trajectory differ slightly for each person. Education about the expected course of illness and potential safety concerns is useful, as it allows patients and families to prepare advance directives and durable powers of attorney. Some may also make other lifestyle changes, like moving to a single-level house or considering a move to a continuing care retirement community. Addressing driving safety is important in AD and all varieties of dementia, as patients with dementia will ultimately need to refrain from driving. Behavioral or lifestyle interventions, such as increasing exercise, healthful eating behaviors, and engagement in cognitive and social activities, are also useful at this stage and help maintain physical health and quality of life. There are several pharmacological interventions that are approved by the US Food and Drug Administration for memory loss in AD, including donepezil, galantamine, rivastigmine (early-to-moderate stage), and memantine (moderate-to-severe stage), that can be considered by the
patient’s treating physician. In moderate-to-later stages of the disease course, patients will become dependent for basic daily activities such as eating, bathing, and grooming, and may also display challenging behaviors like wandering or agitation. Maintaining a regular, predictable schedule that includes some form of activity, ensuring good nighttime sleep, and assessing for other causes of agitation (eg, infection, pain) can be helpful in these situations. Occasionally, medications can be prescribed to combat agitation or aggression, though certain types of medications (eg, antipsychotics, sedatives) often have unwanted side effects in older adults or those with dementia, so these decisions need to be made judiciously. Patients with late-stage disease often experience difficulties with swallowing, communication, and/or bladder and bowel control and need round-the-clock care. In moderate to late-stage disease, many families find they need to rely on hired caregivers or consider an assisted living facility for assistance in managing their loved ones’ needs. Caregiver support and education are essential at all stages of the disease.
Lewy Body Disorders
Lewy body dementia (LBD) encompasses both DLB and PDD. The primary neuropathologic feature in both diseases is the accumulation of cortical Lewy bodies, resulting from abnormal aggregation of α-synuclein. However, additional pathologic features, including accumulation of tau protein, concurrent vascular and/or AD pathology, neurotransmitter abnormalities, and frontostriatal projections that have been disrupted by loss of dopaminergic neurons, can all contribute to the cognitive decline associated with these diseases.
Although PD and DLB have historically been considered related but separate clinical entities, many of the neuropathologic, clinical, cognitive, and psychiatric characteristics of the two diseases have considerable overlap. Both are associated with cognitive fluctuations, visual hallucinations, psychiatric symptoms, and REM sleep behavior disorder in the context of PD motor symptoms, and thus recent debates have examined whether PD, PDD, and DLB should be placed along a clinical continuum representing the same underlying pathology. Currently, the differential diagnosis is based on the timing of the onset of the cognitive symptoms. A patient with PD who develops dementia after more than 1 year of well-established motor symptoms is classified as having “PDD,” while a patient whose motor symptoms occur after the onset of dementia (or whose dementia develops within 1 year of motor symptom onset) is classified as having “DLB.”
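The timing-based distinction above can be expressed as a simple decision rule. A minimal sketch follows; the function and parameter names are hypothetical, as the chapter specifies only the 1-year convention itself:

```python
# Illustrative sketch of the "1-year rule" distinguishing PDD from DLB.
# Onset times are in years on a shared timeline; names are hypothetical.

def classify_lewy_body_dementia(motor_onset, dementia_onset):
    """Return 'PDD' if dementia follows more than 1 year of established
    motor symptoms; otherwise 'DLB' (dementia preceding, or within
    1 year of, motor symptom onset)."""
    if dementia_onset - motor_onset > 1:
        return "PDD"
    return "DLB"

classify_lewy_body_dementia(motor_onset=0, dementia_onset=3)    # "PDD"
classify_lewy_body_dementia(motor_onset=2, dementia_onset=1.5)  # "DLB"
```

The rule is a clinical convention rather than a biological boundary, which is one reason the continuum view discussed above has gained traction.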
Parkinson disease Diagnosis of PD according to the United Kingdom Parkinson Disease Society Brain Bank clinical diagnostic criteria requires specific motor symptoms, including bradykinesia and at least one of the following: muscular rigidity, rest tremor, and/or postural instability. Although motor symptoms have long been considered the defining feature of PD, a wide range of associated nonmotor symptoms, such as impaired sleep patterns, psychiatric symptoms, gastrointestinal dysfunction, and cognitive impairment, are increasingly recognized as having substantial impact on functional abilities and quality of life for these patients. Of these nonmotor symptoms, cognitive dysfunction is a substantial concern, with upwards of 80% of patients who live for 20 years or more with PD expected to develop PDD over the course of the disease. Current Movement Disorder Society task force recommended consensus diagnostic criteria for PDD include (1) a diagnosis of PD, (2) PD symptoms developed prior to the onset of dementia,
(3) impaired global cognition (eg, on MMSE or MoCA), (4) cognitive impairment severe enough to impair activities of daily living, and (5) impairments in more than one cognitive domain on detailed neurocognitive testing. Additional supportive symptoms may include psychiatric symptoms (eg, depression, apathy, delusions) or impaired sleep. Even among patients without dementia, however, the rate of concurrent cognitive impairment can be quite high. PD-mild cognitive impairment (PD-MCI), similar to MCI due to AD, is characterized by subjective cognitive decline (noted by the patient, an informant, or a clinician), objective impairments on neuropsychological assessment, and cognitive deficits that do not significantly interfere with functional independence, in the presence of clinically verified PD. PD-MCI is common, with overall prevalence estimates of approximately 30% to 40%. Cognitive deficits often emerge early during the course of the disease, with 10% to 30% of newly diagnosed patients with PD identified as having cognitive impairment. However, there is substantial variability in the nature and course of cognitive symptoms in PD, with many patients maintaining a stable or fluctuating course and others demonstrating more rapid decline (Figure 57-3). Several variables associated with more rapid cognitive decline have been identified, including age, disease duration, sex, and specific genetic mutations.
FIGURE 57-3. Change in cognitive diagnostic status over time in a multi-site Parkinson disease (PD) cohort. The number inside each node represents the number of people with the corresponding cognitive status indicated by its color. The nodes with dashed lines represent people with only data from the first visit. The links represent the group participants who continued to the next visit. PDD, PD with dementia; PD-MCI, PD with mild cognitive impairment; PD-NCI, PD with no cognitive impairment. (Reproduced with permission from Phongpreecha T, Cholerton B, Mata IF, et al. Multivariate prediction of dementia in Parkinson’s disease. NPJ Parkinsons Dis. 2020;6:20.)
Most patients with PD exhibit at least some decline in attention, working memory, processing speed, or other executive functions, although the nature and degree of impairments across other domains are variable. PDD is often characterized by worsening visuospatial deficits, impaired verbal fluency, difficulty planning or shifting to a new stimulus, slowed information processing speed, and impaired memory. Memory impairment is most frequently attributable to a retrieval deficit since recognition is often intact. Procedural learning may be impaired, a pattern atypical in normal aging or in AD. Some language skills are intact, such as vocabulary, while others that
tap additional cognitive domains, such as verbal fluency, may be impaired. Mechanical aspects of speech are often impaired as well. Although PDD has been previously characterized as a “subcortical” dementia to distinguish it from cortical dementias such as AD, this characterization has been criticized more recently and may serve simply as a gross depiction of the cognitive profile. Indeed, recent research suggests that while initial mild cognitive deficits in PD likely result from depleted dopamine in the midbrain and resulting defects in the frontostriatal loop, cortical pathology is required for progression to dementia.
Dementia with Lewy bodies Current consensus criteria for probable DLB include
(1) a diagnosis of dementia (defined as cognitive impairment sufficient to impair functional abilities) and (2) two or more core clinical features (fluctuating cognition, well-formed visual hallucinations, rapid eye movement [REM] sleep behavior disorder, and/or one or more cardinal motor symptoms of PD), or one core clinical feature in the presence of one or more indicative biomarkers (eg, reduced dopamine uptake on SPECT or PET scan, REM sleep behavior disorder diagnosed by polysomnography). As discussed above, DLB and PDD share many of the same features, including cognitive fluctuations, neuropsychiatric features, and motor symptoms. However, the timing and severity of these symptoms may differ. Visual hallucinations often occur earlier in DLB, delusions may be more common, and a differential response to antiparkinsonian medications has been reported. In terms of cognition, prominent visuospatial and executive impairments are noted, similar to PDD. Memory impairments, although not necessary for diagnosis, tend to be more prominent in DLB. It is thus unsurprising that concurrent AD pathology is common among patients with DLB.
Given the overlap in symptoms and pathology, differential diagnosis between DLB and AD can thus also be difficult. The presence of visual hallucinations in patients with MMSE scores greater than 20 is highly suggestive of DLB. Neuropsychological studies have identified typical cognitive profiles that may aid in diagnosis. In DLB, patients have more difficulty than AD patients in copying complex designs, assembling pieces of an object, or completing other tasks requiring visuospatial skills. In contrast, AD subjects generally show significantly more impairment on delayed recall tasks than patients with DLB. DLB patients have attentional skills that are generally equivalent to those of AD patients; however, patients with DLB
exhibit significant attentional fluctuations. As a result, evaluating attention over time is more helpful than the overall severity of attention problems in the differential diagnosis. Finally, these cognitive profiles are most evident early in the course of the diseases. As the diseases progress, all cognitive functions become impaired and neuropsychological testing is less helpful for diagnosis.
Palliative care concerns specific to PD and DLB There is increasing recognition that appropriate palliative care that addresses cognitive changes in PD and DLB is both vital and underutilized. Given the range of symptoms in PD and associated caregiver and patient burden, an early focus on nonmotor symptom control, adjustment to cognitive changes, and advance care planning is strongly recommended in addition to routine care that has historically focused almost exclusively on motor symptom management. To address the problems associated with increasing cognitive decline, occupational therapy, integrated care models, and psychoeducational programs for patients and family members may be helpful tools. In general, however, both patients and caregivers report that they are not provided with enough information about the nature and prognosis of the diagnosis, and thus lack the tools necessary to address end-of-life care issues and mitigate problems associated with advancing cognitive disease, such as medication reconciliation and home safety issues. DLB is associated with specific difficulties in accessing palliative care, largely due to lack of knowledge about the diagnosis in the general medical community and related difficulty in adequately diagnosing DLB. Further, patients with DLB often present with behavioral problems that may result in difficulty accessing resources. As a result, patients are often not appropriately medicated and physicians rarely discuss what to expect as the disease progresses. Assessment resources such as the Palliative Care Outcome Scale or the Edmonton Symptom Assessment System may be helpful in identifying care needs in patients. However, provider education is a vital initial step toward providing adequate community palliative care for PD/DLB.
Vascular Cognitive Impairment
VCI is an umbrella term that includes both mild VCI and major VCI (VaD) in the context of imaging evidence for cerebrovascular disease. A diagnosis of mild VCI requires impairment in at least one cognitive domain with mild or no impairment in activities of daily living that are independent of any motor or sensory impairments caused by the vascular event. A diagnosis of
major VCI (or VaD) requires significant deficits in at least one cognitive domain along with correlated impairment in daily functional activities (again that are independent of any motor or sensory sequelae associated with the vascular event). VaD subtypes include poststroke dementia (the only subtype that requires a clear temporal relationship between vascular event and cognitive decline), subcortical ischemic VaD, multi-infarct dementia, or mixed dementia (representing combined suspected vascular disease and other neurodegenerative diseases including AD and LBD).
Cognitive profiles of patients with VCI, especially in mild or early forms, can be quite variable given the underlying neuropathologic heterogeneity associated with the diagnosis. Executive function deficits, reduced verbal fluency, and slow processing speed are common. Depression, irritability, and lack of initiative are also frequently seen in patients with VCI. Contrary to long-standing clinical lore, VCI does not necessarily present primarily with a clear temporal relationship to a known vascular event nor to stepwise deterioration in cognition. Rather, continuous small vessel insults can lead to slowly progressive decline in cognitive abilities.
Thus, it can be difficult to differentiate VaD from AD on the basis of the clinical course of symptoms. Detailed review of cardiovascular risk factors and cognitive profile may provide better differentiation. A review of studies examining early-stage AD and VaD found that the latter group had more pronounced deficits in executive function on tests such as the Wisconsin Card Sorting Test and the executive function scale of the Mattis Dementia Rating Scale than did adults with AD. Interestingly, performance between the two groups was similar on tests of selective attention and working memory such as the Trail-Making Test and Stroop Color Word Interference Test. In contrast, VaD patients had better performance than AD patients on tests of verbal learning and story recall, such as the California Verbal Learning Test and the Logical Memory subtest of the Wechsler Memory Scale-Revised (WMS-R), with fewer intrusions. However, VCI can impact the structure and function of the hippocampus, thus leading to more notable memory deficits in some patients, although these impairments may not occur until later in the disease process. Because memory impairment is most often the reason patients seek evaluation, adults with VaD may have more advanced dementia and greater cognitive impairment at the time of diagnosis. It is important to note that as both VaD and AD progress, the cognitive profiles become more similar, so that differentiating mid-stage disease is very
difficult. In addition, neuropathologic studies have shown that many patients previously diagnosed with VaD due to presence of vascular risk factors such as diabetes, hypertension, and radiologic evidence of ischemia have prominent AD pathology as well. Prevalence estimates of the co-occurrence of AD and VaD range from 20% to 40% of patients with dementia. Few studies have attempted to differentiate between mixed AD/VaD and either form of dementia on a neuropsychological basis, although it has been suggested that mixed dementia most closely resembles VaD from a cognitive perspective. Vascular pathology increases the likelihood that patients with neuropathologic AD will show significant cognitive impairment.
Palliative care concerns specific to VCI Given the variable (and often unknown) progression of cognitive symptoms in VCI, palliative care concerns are likely to be patient-specific, and may include lifestyle interventions aimed at maximizing retained cognitive abilities, quality of life, and overall physical health. Occupational therapy, psychoeducation, managing comorbid depression, and training and support for caregivers may all be helpful interventions. In advanced major VCI, palliative care interventions and end- of-life planning and preparation similar to those used most frequently in AD are recommended.
Frontotemporal Lobar Degeneration
Frontotemporal lobar degeneration (FTLD) is caused by a range of underlying neuropathologic conditions (including intracellular inclusions of tau or transactive response DNA-binding protein [TDP-43], among others). FTLD onset occurs at slightly younger ages than in other conditions such as AD, most commonly in the late 50s to early 60s. It is thought to be the second most common form of dementia in individuals under the age of 65. Roughly 30% to 40% of FTLD cases are linked to genetic mutations, including granulin (GRN) and microtubule-associated protein tau (MAPT) mutations, and patients with some types of genetic mutations may develop symptoms at even earlier ages. As with AD, FTLD patients show insidious onset of symptoms with a gradual progression. Various clinical presentations can be caused by FTLD, primarily resulting in gradually progressive disturbances in language and/or behavior. Although comparisons of AD and FTLD groups do not always reveal significantly different cognitive profiles, in general, FTLD patients show relatively spared memory performance in comparison to their executive and language functioning, especially when
memory cues are provided. Apraxia is also more common in FTLD than in AD. A multidisciplinary approach is often most helpful in the differential diagnosis since studies indicate that up to 75% of pathologically confirmed FTLD patients also appear to meet clinical criteria for probable AD, and certain subtypes of FTLD are often misdiagnosed as a primary psychiatric disorder early on. The following diagnostic variants are currently recognized as most likely resulting from FTLD, although the extent to which these conditions share pathology with other diseases, and the likelihood that they represent distinct disorders, are not currently well defined.
Behavioral variant FTLD This diagnostic category (bvFTD) accounts for nearly 50% to 60% of all FTLD diagnoses. This variant most often presents initially with a prominent decline in social cognition and behavior and/or executive dysfunction, with relative sparing of other cognitive functions, and includes at least three of the following: (a) disinhibition, (b) apathy, (c) diminished empathy/sympathy, (d) perseverative, stereotyped, or compulsive behaviors, and/or (e) hyperorality/dietary changes. These behavioral and/or executive symptoms are the source of impairment in daily activities. While these criteria are sufficient for a diagnosis of “possible” bvFTD, a diagnosis of “probable” bvFTD requires neuroimaging findings of disproportionate frontal or temporal lobe involvement, or a pathogenic mutation. Early in the disease process, performances on formal neuropsychological testing may be remarkably well preserved, underscoring the importance of a thorough diagnostic interview and history with the patient and an informant; however, many individuals do display impaired selective and divided attention, difficulty shifting mental set, poor abstraction and reasoning, impaired verbal fluency, and perseverations. Unfortunately, cognitive screeners such as the MMSE are not very helpful in screening for early bvFTD since many patients score within normal limits early in the disease.
Primary progressive aphasia PPA typically occurs in the fifth or sixth decade of life, and its rate of progression varies greatly. The diagnosis of PPA requires initial prominent language dysfunction with relative sparing of other cognitive domains early on, and the absence of radiologic evidence of cerebrovascular or other neurologic injury that would account for the aphasia. Because anomia and other language deficits may occur in a number of neurodegenerative conditions, the differential diagnosis of PPA rests on the clear demonstration that nonlinguistic cognitive and behavioral functions
are intact during the initial stages of the disease. The clinician must carefully determine whether poor performance on memory and other nonlanguage tests is due to language deficits such as impaired comprehension of instructions.
Differential diagnosis is also complicated by the heterogeneous etiologies of PPA demonstrated by the different subtypes. Although not consistent across all cases, agrammatic PPA is often associated with tau pathology, logopenic PPA with AD pathology (although not always in brain regions typically associated with AD), and semantic PPA with TDP-43 pathology. PPA variants include (1) agrammatic/nonfluent aphasia and (2) semantic variant PPA (sometimes called semantic dementia [SD]). As noted, a third variant, LPA, is often included under the larger umbrella of PPA; however, neuropathologically, LPA is primarily attributed to AD-type pathology, so it is often considered a language variant of AD.
Progressive Nonfluent/Agrammatic Aphasia Progressive nonfluent/agrammatic aphasia (PNFA) is characterized by labored articulation and/or agrammatism, frequently occurring alongside apraxia of speech. Patients with PNFA also typically have intact comprehension for simple speech and phrases but impaired comprehension for complex or syntactically irregular phrases. Semantic knowledge is also well preserved.
Semantic Dementia SD, on the other hand, involves a loss of ability to understand words (semantic knowledge of words and objects), which is accompanied by marked deficits in confrontational naming. Other deficits include difficulty with visual recognition of objects, dyslexia, and dysgraphia. Patients with semantic dementia may display prominent visual agnosias and may not be able to demonstrate object use accurately. In contrast to PNFA, speech production remains intact in early disease as does repetition.
Logopenic Aphasia LPA is the most recently characterized PPA variant. LPA presents with impaired word retrieval in spontaneous speech and difficulty with naming, in addition to impaired sentence repetition, and in some instances, sentence comprehension. The hypothesized mechanism behind some of the language deficits in LPA, especially impaired sentence repetition, is a deficiency in short-term working memory, and hence, single-word repetition, which requires minimal working memory, is well preserved. The naming deficit in LPA is not as dramatic as in SD, and these patients also have preserved object knowledge. While patients with LPA
demonstrate effortful word retrieval, this again is less severe than in PNFA and their speech is usually grammatically accurate and free from motor speech abnormalities, which marks another useful distinction between LPA and other PPA subtypes.
FTLD movement disorders Certain movement disorders, which may also include behavioral and/or language disturbance, may be caused by or associated with FTLD.
FTLD-Motor Neuron Disease Frontotemporal dementia with motor neuron disease (FTD-MND) tends to be a rapidly progressive condition, with death typically occurring 3 to 5 years after symptom onset. Age of onset is variable, typically ranging from 35 to 75 years. The FTD-MND syndrome is typically characterized by gradual onset of both cognitive and psychiatric symptoms, in addition to, though not necessarily simultaneously with, the development of classic upper and/or lower motor neuron dysfunction, such as muscle wasting, paraparesis, fasciculations, or abnormal reflexes. Early bulbar involvement can also lead to symptoms of dysphagia and pseudobulbar affect, or uncontrollable episodes of laughing or tearfulness.
Behaviorally, these patients may display similar symptoms to patients with bvFTD. Cognitive symptoms can include variable language and executive dysfunction.
Progressive Supranuclear Palsy Although progressive supranuclear palsy (PSP) is a tauopathy and Lewy bodies are found in only a minority of cases, PSP is frequently misdiagnosed as PD because of its overlapping parkinsonian features. A core feature of the disorder is vertical supranuclear gaze palsy, although this symptom may not present early in the course of the disease. Patients also present with postural instability, and falls are often seen shortly after onset. Cognition is characterized by mental and psychomotor slowing and notable executive dysfunction, often fairly early in the disease course. Memory impairments are observed, but they are not as severe as in AD. Language functions resemble those seen in PD. Visual spatial deficits and increased apathy are also observed.
Corticobasal Syndrome Corticobasal syndrome (CBS) is the primary clinical phenotype of corticobasal degeneration, which is characterized by abnormal tau deposition, and relatively focal and asymmetric cortical atrophy in frontal and parietal brain regions on imaging. The mean age of onset for CBS is in the early 60s. CBS is characterized by progressive, asymmetric motor symptoms such as tremor, loss of coordination, rigidity, and myoclonus, and a
higher prevalence of alien limb syndrome. Unilateral apraxia (most commonly ideomotor) is common. A range of language abnormalities including slowed verbal fluency, and/or executive dysfunction may also be seen in CBS. In contrast to AD, memory is typically well preserved early on, but may worsen with disease progression and may even become prominent in some patients. Neuropsychiatric and behavioral symptoms are common and can include apathy, personality change, disinhibition, and irritability.
Palliative care concerns specific to FTLD Given the heterogeneity in FTLD phenotypes, care recommendations vary by subtype. Many of the core symptoms of bvFTD and related subtypes, including impulsivity, poor judgment, and disinhibition, can be distressing for family members, so education about bvFTD and early support for family and caregivers is essential. Education about possible environmental and behavioral modifications is recommended and typically focuses on mitigating safety risks (eg, driving impulsively, poor management of money, falling victim to scams, etc) given that reductions in insight and self-awareness are quite common in bvFTD. Early consultation with a psychiatrist is often helpful as well; there is some evidence that treatment with selective serotonin reuptake inhibitors may be helpful in managing unwanted behavioral symptoms.
Patients with either primary (PPA subtypes) or secondary language dysfunction (PSP, CBS, FTD-MND) may benefit from early consultation with a speech-language pathologist (SLP), who may be able to provide useful strategies. Augmentative and alternative communication devices can also be employed as the condition progresses. SLPs may also be of benefit as part of a multidisciplinary team as some of these patients may have, or go on to develop, dysarthria and/or dysphagia. Physical therapy can also be a valuable tool to assist with early motor dysfunction in FTD-MND, PSP, and CBS. As with most neurodegenerative conditions, motor and/or sensory dysfunction can occur in the other FTLD subtypes as well as the disease progresses to moderate or severe stages.
Dementia Due to Suspected Non-Alzheimer Disease Pathophysiology Dementia due to suspected non-Alzheimer disease pathophysiology (SNAP) is a relatively new biomarker-based term that is used to describe individuals who have evidence of neurodegeneration on neuroimaging, but who lack biomarkers classic to AD—specifically, amyloid. SNAP is more prevalent with increasing age. While there is little evidence of SNAP in persons
younger than 50, prevalence rises steadily after age 60. Most commonly, the term SNAP has been used to describe individuals with either normal cognition or MCI who lack evidence of clinically significant amyloid but have evidence of neurodegenerative changes in the brain.
Neurodegeneration characteristic of SNAP is also observed in individuals with clinical dementia; however, these individuals’ condition is usually attributed to a non-AD etiology based on clinical presentation (eg, PPA, DLB), and it is hypothesized that TDP-43, hippocampal sclerosis, vascular disease, and/or other pathophysiological processes may be the causative factor. No definitive cognitive phenotype has been identified in individuals with SNAP who are deemed clinically normal, though SNAP does appear to confer risk for subsequent cognitive decline. Not surprisingly, those with SNAP and MCI have greater likelihood of progression to dementia than those with SNAP and no evidence of MCI. Further, those with SNAP, regardless of cognitive status, have a greater risk of decline compared to those without SNAP. The data are mixed on the risk of decline in SNAP compared to those without SNAP but who are amyloid and/or tau positive, though it is generally accepted that the presence of neurodegeneration, amyloid, and tau combined confers the greatest risk for accelerated cognitive decline.
Limbic-Predominant Age-Related TDP-43 Encephalopathy
Limbic-predominant age-related TDP-43 encephalopathy (LATE) is a common, but relatively recently described finding in those in their eighth or ninth decade of life. Estimates vary, but recent autopsy studies of individuals aged 80 and older have identified evidence of LATE in 5% to 50% of their samples. This proteinopathy was first discovered in autopsies of individuals with cognitive impairment that mimicked AD; that is, these individuals had an amnestic cognitive syndrome similar to AD. Like AD, LATE can also evolve to include multiple cognitive domains, but the clinical presentation remains distinct from other TDP-related conditions such as FTLD-TDP. Given the occurrence in very late life, it is unsurprising that the pathological changes of LATE often co-occur with those of AD, LBD, and/or VCI. While data on this entity are still in their infancy, there is some indication that cognitive changes progress more slowly in those who only have evidence of LATE compared to those with evidence of multiple pathologies.
Alcohol-Related Dementia
Contradictory evidence exists as to the role of mild-to-moderate alcohol use and the risk of developing certain dementias, including AD and VaD. Chronic and profound alcohol use, however, can have a negative effect on cognition and may exacerbate the cognitive symptoms of other dementias and brain injuries. Poor nutrition (thiamine deficiency in particular) resulting from alcohol abuse is a primary contributor to the onset of cognitive problems. In addition, liver disease can interfere with thiamine regulation in the brain and may be a factor in the multiple cognitive and motor impairments associated with long-term alcohol use.
Persistent alcohol dementia Alcohol dementia involves impairment in more than one area of cognitive function that persists after the patient stops drinking for a period of time. Visuospatial problem-solving deficits and executive problems, including apathy, decreased judgment, and reduced interest in self-care, are prominent in these patients. Memory problems, in particular anterograde amnesia, are also common, but are generally not more impaired than other cognitive domains, and recognition is often intact. Typical neuropsychological sequelae include impairments on tasks requiring visual scanning, visuospatial organization, perceptual-motor speed, sustained attention, abstraction, and mental flexibility, while language functions are generally preserved. Perseveration and confabulation are common indicators of impaired executive function in the responses of patients with chronic alcohol use. It is also noteworthy that chronic alcohol use may potentiate the onset of AD, and produce a clinical picture of conjoint cognitive deficits.
Wernicke–Korsakoff syndrome The most severe neurologic outcome of heavy and prolonged alcohol use, and the result of critical malnutrition, is Wernicke–Korsakoff syndrome. In contrast to patients with persistent alcohol dementia, Wernicke–Korsakoff patients exhibit an acute symptom onset, often beginning with a grave confusional state, nystagmus, and significant ataxia. During this phase, symptoms progressively and rapidly worsen if treatment (immediate thiamine replacement) is not applied. This phase is almost always followed by a chronic and progressive stage that is associated primarily with impaired frontal and cerebellar functions. Unlike persistent alcohol dementia, Korsakoff patients have significant impairments in memory relative to other cognitive effects, and memory impairment includes both retrograde and anterograde amnesia for episodic events, frequently with prominent confabulation. In contrast to AD, semantic memory is relatively spared in the Korsakoff patient. Patients show a characteristic gradient of remote memory
impairment, with better recall for remote events and progressively reduced recall of recent events. As with persistent alcohol dementia, executive dysfunction and visuospatial impairments are also significant symptoms of the syndrome. Cerebellar atrophy and peripheral nerve damage lead to impaired gait, decreased or abnormal reflexes, and other movement abnormalities in these patients.
Palliative care concerns specific to alcohol-related dementias Alcohol-related dementias, unlike progressive dementias such as AD and DLB, may be amenable to interventions if made in a timely manner (such as drinking cessation and nutritional supplementation), which may improve cognitive symptoms or stall further cognitive decline. Once these important interventions have been put in place, encouraging skill maintenance, providing adequate scaffolding for cognitive skills, maintaining daily structure and routine, and help with general self-care may be useful for maintaining maximal cognitive function.
Prion Diseases
The prion diseases are a group of rare fatal spongiform encephalopathies that result from mutations and polymorphisms in the prion protein gene (PRNP), causing rapid neurodegeneration. These diseases, of which Creutzfeldt–Jakob is the most well-known, produce a profound and quickly progressive dementia, and may be sporadic, familial, or infectious. Sporadic cases are the most common, and are generally diagnosed in people in their 60s, with a typical age range between 40 and 80. Early cognitive signs of the prion diseases are usually vague and nonspecific, such as poor memory, concentration, and problem solving. Initially, there are also often psychiatric symptoms, including apathy, emotional lability, impaired sleep, and appetite loss. Early frank neurologic symptoms are not common, but as the disease progresses, hyperreflexia, impaired coordination, changes in saccadic eye movements, and incontinence may occur. Given the early vague symptoms and dearth of neurologic symptoms, patients are not likely to present for evaluation until they are in the more moderate-to-advanced stages, which can occur in a matter of months. Diagnosis typically involves measuring electroencephalographic changes, hyperintensities on MRI, and abnormal 14-3-3 protein levels in the CSF. The most common differential diagnoses include depression, AD, and LBD. Given the generally rapid course of disease in combination with diagnosis that typically occurs in the later stages
of disease, palliative care often consists largely of hospice care, social work interventions to address disability and making end-of-life decisions, and bereavement resources and interventions for caregivers.
Normal Pressure Hydrocephalus
NPH is a potentially reversible dementia that makes up about 6% of dementia cases. Abnormalities in the production, absorption, or flow of CSF result in ventricular dilatation. Patients may present with a triad of clinical symptoms that include gait or balance disturbance, urinary incontinence, and cognitive deficits. Unlike most other dementias, cognitive symptoms often present later in the course. This can make early clinical diagnosis difficult since gait abnormalities and incontinence have a variety of etiologies in geriatric populations. Radiographic evidence and intraventricular pressure measurement aid in the diagnosis. When cognitive deficits are present, they are most frequently observed in executive functioning. Although many subjects may have subjective memory complaints, memory deficits are not a prominent early symptom, and some memory declines are attributable to attention problems, which are more common. However, many of these patients may have concurrent underlying neurodegenerative disease, and thus may present with varied cognitive profiles.
Treatment usually involves placement of a ventriculoperitoneal shunt to divert CSF for better absorption. However, surgery in geriatric populations always involves added risks, and the benefits of shunt surgery remain unclear. A wide variety of success rates have been reported, with better outcomes often reported after shorter follow-up periods. Patients with the full triad of symptoms appear to respond best to shunt surgery. Gait problems show the most frequent improvement, while cognitive function improves in the fewest patients. Recently, findings from a 5-year follow-up of NPH patients with and without shunt surgery showed that at the 6-month assessment, 83% of the shunt cases improved in gait and 46% improved in memory. Of surviving shunt cases 5 years after surgery, 39% remained improved in gait, and fewer than 10% continued to show improvements on cognitive tests. Results suggested that outcomes may be improved in younger patients. Palliative care may involve inpatient and outpatient rehabilitation, physical therapy, occupational therapy, and other interventions aimed at maximizing retained cognitive functions.
HIV-Associated Neurocognitive Disorder
Although there is often the perception that geriatric patients are not at risk for human immunodeficiency virus (HIV) infection, the Centers for Disease Control and Prevention reported that 10% of all HIV cases in the United States are in patients 50 years of age or older, and these numbers are expected to grow. With the use of combination antiretroviral therapies, progression to HIV-associated dementia is rare (2–3%). However, the prevalence of milder cognitive deficits is more frequent, with prevalence estimates ranging from 50% to 60%. Commonly affected areas of cognition are speed of information processing, attention, and motor speed, although it is increasingly recognized that impairments in broader executive functions, learning, and prospective memory are prevalent. As a result, differentiating these symptoms from early AD can be difficult clinically and may require more careful evaluation of both AD- and HIV-related CSF and imaging biomarkers. Palliative care depends upon severity of disease but should focus on managing potential multiple medical and psychosocial comorbidities and end-of-life planning as appropriate.
Neurosyphilis
Despite successes in treatment and education, syphilis cases have continually increased since 2000. Neurosyphilis can occur at any time during the disease; syphilitic dementia, however, generally occurs as the disease advances (5–25 years after initial infection) and may present with hallucinations, delusions, mood disturbance, personality change, strokes, ataxia, or cognitive decline. Deficits are observed in short-term memory and mental status, with progressive cognitive decline in all areas of functioning. Although neurosyphilis is often classified as a reversible dementia, there is only limited evidence to support cognitive benefits with penicillin treatment.
Thus, following treatment with penicillin, palliative care may include occupational therapy or cognitive rehabilitation to help maximize remaining cognitive function. Neurosyphilis is more likely to occur in patients with comorbid HIV, and thus both should be considered during differential diagnosis. Neurosyphilis should further be considered in a differential diagnosis of dementia of unclear etiology in geriatric patients.
CONCLUSION
We have greatly furthered our understanding that age-related medical conditions not considered classically neurologic in nature can nevertheless impact the central nervous system and thereby affect cognition. This knowledge has led to the realization that many of the changes in cognition previously thought to be unavoidable concomitants of normal aging are in fact preventable and in some cases even reversible. The deleterious consequences of not treating such disorders have become evident, given that many common diseases such as T2DM and hypertension appear to be risk factors for dementia. In turn, early identification of dementia or the prodromal condition MCI will become critical as therapeutic options for delaying disease progression proliferate. Careful characterization of cognitive status through neuropsychological assessment can provide the clinician with essential information to determine whether the patient is experiencing symptoms that warrant concern or further treatment. As the field of geriatrics approaches the goal of controlling or even preventing endemic late-life chronic diseases, it will become increasingly clear that deleterious cognitive changes that occur with healthy aging are fewer and more subtle than we thought, and that they are accompanied by age-related strengths in experience and knowledge that will enable us to lead vital, productive lives well into our 80s and beyond.
FURTHER READING
Albert MS, DeKosky ST, Dickson D, et al. The diagnosis of mild cognitive impairment due to Alzheimer’s disease: recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimers Dement. 2011;7(3):270–279.
Bennett S, Thomas AJ. Depression and dementia: cause, consequence or coincidence? Maturitas. 2014;79(2):184–189.
Berger I, Wu S, Masson P, et al. Cognition in chronic kidney disease: a systematic review and meta-analysis. BMC Med. 2016;14(1):206.
Biessels GJ, Despa F. Cognitive decline and dementia in diabetes mellitus: mechanisms and clinical implications. Nat Rev Endocrinol. 2018;14(10):591–604.
Burkauskas J, Lang P, Bunevičius A, et al. Cognitive function in patients with coronary artery disease: a literature review. J Int Med Res. 2018;46(10):4019–4031.
Cairns NJ, Bigio EH, Mackenzie IR, et al. Neuropathologic diagnostic and nosologic criteria for frontotemporal lobar degeneration: consensus of the Consortium for Frontotemporal Lobar Degeneration. Acta Neuropathol. 2007;114(1):5–22.
Cheng C, Huang CL, Tsai CJ, et al. Alcohol-related dementia: a systematic review of epidemiological studies. Psychosomatics. 2017;58(4):331–342.
Emre M, Aarsland D, Brown R, et al. Clinical diagnostic criteria for dementia associated with Parkinson’s disease. Mov Disord. 2007;22(12):1689–1707.
Inouye SK, Westendorp RG, Saczynski JS. Delirium in elderly people. Lancet. 2014;383(9920):911–922.
Iwasaki Y. Creutzfeldt-Jakob disease. Neuropathology. 2017;37(2):174–188.
Krause D, Roupas P. Effect of vitamin intake on cognitive decline in older adults: evaluation of the evidence. J Nutr Health Aging. 2015;19(7):745–753.
Laursen P. The impact of aging on cognitive function: an 11-year follow-up study of four age cohorts. Acta Neurol Scand Suppl. 1997;172:7–86.
Litvan I, Goldman JG, Tröster AI, et al. Diagnostic criteria for mild cognitive impairment in Parkinson’s disease: Movement Disorder Society Task Force guidelines. Mov Disord. 2012;27(3):349–356.
Loggia G, Attoh-Mensah E, Pothier K, et al. Psychotropic polypharmacy in adults 55 years or older: a risk for impaired global cognition, executive function, and mobility. Front Pharmacol. 2020;10:1659.
McKeith IG, Boeve BF, Dickson DW, et al. Diagnosis and management of dementia with Lewy bodies: fourth consensus report of the DLB consortium. Neurology. 2017;89(1):88–100.
McKhann GM, Knopman DS, Chertkow H, et al. The diagnosis of dementia due to Alzheimer’s disease: recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimers Dement. 2011;7(3):263–269.
Olaithe M, Bucks RS, Hillman DR, et al. Cognitive deficits in obstructive sleep apnea: insights from a meta-review and comparison with deficits
observed in COPD, insomnia, and sleep deprivation. Sleep Med Rev. 2018;38:39–49.
Oliveira LM, Nitrini R, Román GC. Normal-pressure hydrocephalus: a critical review. Dement Neuropsychol. 2019;13(2):133–143.
Skrobot OA, Black SE, Chen C, et al. Progress toward standardized diagnosis of vascular cognitive impairment: Guidelines from the Vascular Impairment of Cognition Classification Consensus Study. Alzheimers Dement. 2018;14(3):280–292.
Smail RC, Brew BJ. HIV-associated neurocognitive disorder. Handb Clin Neurol. 2018;152:75–97.
Chapter
Delirium
Matthew E. Growdon, Tanya Mailhot, Jane S. Saczynski, Tamara G. Fong, Sharon K. Inouye
Delirium, an acute disorder of attention and global cognitive function, is a common, serious, and potentially preventable source of morbidity and mortality for hospitalized older persons. Delirium affects as many as half of all people age 65 and older who are hospitalized. With the aging of the US population, delirium has assumed heightened importance because persons aged 65 and older presently account for nearly 40% of all days of hospital care. Total costs attributable to delirium spanning the hospital and posthospital period exceed $60,000 per patient; annually over $183 billion (in 2018 US dollars) of US health care costs are attributable to delirium.
Importantly, delirium is preventable in up to 50% of cases. Substantial additional costs linked to delirium accrue after hospital discharge because of the increased need for institutionalization, rehabilitation services, closer medical follow-up, and home health care. Delirium often initiates a cascade of events in older persons, leading to a downward spiral of functional and cognitive decline, loss of independence, institutionalization, and ultimately, death. Delirium is a critical risk marker to identify patients at high risk for poor outcomes. Recently, this fact has been underscored in the care of patients affected by severe acute respiratory syndrome coronavirus 2
(SARS-CoV-2) during the COVID-19 pandemic, as those who present with delirium experience worse hospital outcomes compared to those who do not. With its common occurrence, its frequently iatrogenic nature, and its close linkage to the processes of care, incident delirium can serve as a marker for the quality of hospital care and provides an important opportunity for quality improvement.
DEFINITION
The definition of and diagnostic criteria for delirium continue to evolve. Standardized criteria for delirium in the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5, 2013) represent the current diagnostic standard. These criteria are based on (A) a disturbance in attention and awareness; (B) an acute onset and fluctuating course; (C) an additional deficit in cognition (such as memory, orientation, language, or visuoperceptual ability); (D) disturbances that are not better explained by a preexisting neurocognitive disorder such as dementia and that do not occur in the context of a severely reduced level of arousal, such as coma; and (E) evidence of an underlying medical etiology or multiple etiologies. Expert consensus was used to develop these criteria, however, and performance characteristics such as diagnostic sensitivity and specificity have not yet been reported for DSM-5 criteria. A standardized tool, the Confusion Assessment Method (CAM), provides a brief, validated diagnostic algorithm that is currently in widespread use for identification of delirium. The CAM algorithm relies on the presence of acute onset and fluctuating course, inattention, and either disorganized thinking or altered level of consciousness. The algorithm has a sensitivity of 94% to 100%, specificity of 90% to 95%, and high interrater reliability. Given the uncertainty of diagnostic criteria for delirium, a critical area for future investigation is to establish more definitive criteria, including epidemiologic and phenomenologic evaluations assisted by advances in neuroimaging and other potential diagnostic marker tests.
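The CAM's diagnostic logic reduces to a simple boolean rule, sketched below purely for illustration (this hypothetical helper is not a validated clinical instrument; assessing each feature requires a structured interview of the patient and informant):

```python
def cam_positive(acute_onset_fluctuating: bool,
                 inattention: bool,
                 disorganized_thinking: bool,
                 altered_consciousness: bool) -> bool:
    """Confusion Assessment Method (CAM) algorithm: delirium requires
    BOTH (1) acute onset and fluctuating course AND (2) inattention,
    PLUS EITHER (3) disorganized thinking OR (4) altered level of
    consciousness."""
    return (acute_onset_fluctuating and inattention
            and (disorganized_thinking or altered_consciousness))

# A patient with acute, fluctuating confusion, inattention, and drowsiness
# (altered consciousness) screens positive even without disorganized thinking.
print(cam_positive(True, True, False, True))   # True
# Without inattention, the algorithm is negative regardless of other features.
print(cam_positive(True, False, True, True))   # False
```

Note that the two core features are jointly required, so neither disorganized thinking nor altered consciousness alone can produce a positive screen.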
Learning Objectives
Learn the epidemiology, pathophysiology, clinical presentations, evaluation, and management of delirium in older adults.
Understand the role of various predisposing and precipitating factors in increasing the risk of delirium in older persons, and the associated prognosis and mortality.
Learn the special relationship between dementia and delirium and the role of certain medications in predisposing older adults to delirium.
Recognize that delirium is preventable in up to 50% of cases with proven effective nonpharmacologic approaches.
Gain a clear understanding of the specific indications and efficacy of various treatments, including pharmacologic and nonpharmacologic strategies commonly used to manage delirium.
Understand the latest concepts about special issues related to delirium, including the COVID-19 pandemic, patient preferences and decision making, nursing home care, and palliative and end-of-life care.
Key Clinical Points
Delirium is commonly encountered in older adults in various clinical settings and associated with significant morbidity and mortality, especially in intensive care units, inpatient settings, nursing homes, and following major medical illnesses or surgery.
Delirium is unrecognized in up to 70% of older patients and can lead to long-term functional and cognitive deficits.
The pathophysiology of delirium is currently unclear but posited to be the end result of multiple pathogenic pathways eventually culminating in the dysfunction of various neurotransmitters and major brain networks.
Delirium is commonly due to multiple causes and is preventable in up to 50% of cases through addressing as many predisposing and precipitating factors as possible.
Among the precipitating factors, decreased mobility is strongly associated with delirium, and medical equipment and devices may further contribute to immobilization.
Dementia is the underlying risk factor in up to two-thirds of cases of delirium and must be suspected in patients with slowly progressive cognitive and functional deficits.
Acute onset, varying levels of alertness, and inattention are cardinal features of delirium, and obtaining historical details from a close family member or friend is critical in making a correct diagnosis of delirium.
Nonpharmacologic strategies are the preferred treatment for delirium in older patients, and medications are reserved for more severe symptoms that affect either medical management or patient safety.
EPIDEMIOLOGY
Most of the epidemiologic studies of delirium involve hospitalized older patients, in whom the highest rates of delirium occur. Reported rates vary based on the subgroup of patients studied and the setting of care. Previous studies estimated the prevalence of delirium (present at the time of hospital admission) at 7% to 80% and the incidence of delirium (new cases arising during hospitalization) at 8% to 82%. The highest prevalence and incidence rates occur among ventilated intensive care unit patients. The incidence rates of delirium in high-risk hospital venues, such as the intensive care unit and surgical settings, range from 16% to 82% and 8% to 58%, respectively.
Delirium occurs in up to 48% of patients in nursing homes or postacute settings, and in up to 83% of all patients at the end of life. The rates of delirium in all older persons presenting to the emergency department in several studies have ranged from 8% to 27%. While less frequent in the community setting, delirium is an important presenting symptom to outpatient clinics and often heralds serious underlying disease. Delirium is often unrecognized. Previous studies have documented that clinicians fail to detect up to 70% to 85% of affected patients across all of these settings.
Furthermore, the presence of delirium portends a potentially poor prognosis. Delirium in the intensive care unit is associated with a fourfold increased risk of in-hospital mortality and a sixfold increased risk of mortality at 6 months. In the emergency department, delirium is associated with a sevenfold increased risk of mortality at 6 months. Longer lengths of stay, cognitive and functional sequelae lasting up to 1 year postoperatively, and institutionalization are also consequences of delirium.
PATHOPHYSIOLOGY
The fundamental pathophysiologic mechanisms of delirium remain unclear. Delirium is thought to represent a functional rather than structural lesion. The characteristic electroencephalographic (EEG) findings demonstrate global functional derangements and generalized slowing of cortical background (alpha) activity. It has been hypothesized that delirium is mediated via a final common pathway of different but interacting pathogenic mechanisms leading to dysfunction of multiple brain regions and neurotransmitter systems and
culminating in disruption of large-scale networks. Evidence for a single pathway is lacking, and it remains difficult to ascribe delirium to a distinct neurobiological mechanism. Another hypothesis which has gained favor is that delirium occurs in the setting of an acute stressor, such as surgery or sepsis, superimposed on an underlying brain vulnerability, such as cognitive impairment or frailty. This model suggests that as vulnerability increases, delirium can be triggered by relatively minor acute stressors. Numerous contributors to brain vulnerability have been suggested, including structural lesions, vascular changes, alterations in brain connectivity, neuroinflammation, neurodegeneration, and other age-related changes.
Evidence from EEG, evoked-potential studies, and neuroimaging studies in delirium suggest focal dysfunction localized to the prefrontal cortex, thalamus, basal ganglia, temporoparietal cortex, fusiform, and lingual gyri of the nondominant cortex. Studies using computed tomography (CT) or magnetic resonance imaging (MRI) have found lesions or structural abnormalities in the brains of patients with delirium. Single-photon emission computed tomography (SPECT) studies have shown that delirium is mostly associated with decreased cerebral blood flow, but these results have been variable.
Associated neurotransmitter abnormalities involve elevated brain dopaminergic function, reduced cholinergic function, or a relative imbalance of these systems. Serotonergic activity may interact to regulate or alter activity of these other two systems, and serotonin levels may be either increased or decreased. Extensive evidence supports the role of cholinergic deficiency. Acetylcholine plays a key role in consciousness and attentional processes. Given that delirium manifests as an acute confusional state often with alterations of consciousness, it is likely to have a cholinergic basis.
Anticholinergic drugs can induce delirium in humans and animals, and serum anticholinergic activity is increased in patients with delirium. Physostigmine can reverse delirium associated with anticholinergic drugs, and cholinesterase inhibitors appear to have some benefit even in cases of delirium that are not induced by drugs. Neurotransmitter systems can also be affected indirectly. For instance, in sepsis, the systemic inflammatory response triggers a cascade of local (brain) neuroinflammation, leading to endothelial activation, impaired blood flow, neuronal apoptosis, and neurotransmitter dysfunction. Neuroinflammation can lead to microglial overactivation, resulting in a neurotoxic response with further neuronal injury. Animal studies have found that neurodegeneration causes priming of astrocytes and microglia, resulting in a greater neuroinflammatory response, as well as alterations in vasculature, including the blood-brain barrier, which may render the brain more vulnerable to circulating inflammatory molecules. The stress response associated with severe medical illness or surgery involves sympathetic and immune system activation, including increased activity of the hypothalamic-pituitary-adrenal axis with hypercortisolism and release of cerebral cytokines that alter neurotransmitter systems and the thyroid axis and modify blood-brain barrier permeability. Age-related changes in central neurotransmission, stress management, hormonal regulation, and immune response may contribute to the increased vulnerability of older persons to delirium. The description of delirium as “acute brain failure”—involving multiple neural circuits, neurotransmitters, and brain regions—suggests that understanding delirium may help to elucidate the underlying mechanisms of brain functioning.
ETIOLOGY
The etiology of delirium is usually multifactorial. Among older persons, delirium results from the interrelationship between patient vulnerability (ie, predisposing factors) and the occurrence of noxious insults (ie, precipitating factors). For example, patients who are highly vulnerable to delirium at baseline (eg, patients with dementia or serious illness) can experience delirium after exposure to otherwise mild insults, such as a single dose of a sedative medication. Older patients with few predisposing factors (low baseline vulnerability) are relatively resistant, with precipitation of delirium only after exposure to multiple potentially noxious insults, such as general anesthesia, major surgery, multiple psychoactive medications, immobilization, and infection (Figure 58-1). Based on validated predictive models for delirium, the effects of multiple risk factors appear to be cumulative. Clinically, the importance of the multifactorial nature of delirium is that removal or treatment of one risk factor alone often fails to resolve delirium. Instead, addressing many or all of the predisposing and precipitating factors is often required before delirium symptoms will improve.
FIGURE 58-1. Multifactorial model for delirium. The etiology of delirium involves a complex interrelationship between the patient’s underlying vulnerability or predisposing factors (left axis) and precipitating factors or noxious insults (right axis). For example, a patient with high vulnerability, such as with severe dementia, underlying severe illness, or hearing or vision impairment, might develop delirium with exposure to only one dose of a sleeping medication.
Conversely, a patient with low vulnerability would develop delirium only with exposure to many noxious insults, such as general anesthesia and major surgery, intensive care unit (ICU) stay, multiple psychoactive medications, and prolonged sleep deprivation.
Predisposing Factors
Predisposing factors for delirium include preexisting cognitive impairment or dementia, a history of delirium, advanced age (> 70 years), severe underlying illness and multimorbidity, functional impairment, depression, alcohol abuse, a history of stroke, hypertension, or transient ischemic attack, carotid artery disease, and sensory impairments (vision or hearing) (Table 58-1). Preexisting cognitive impairment, including dementia, is one of the most powerful and consistent risk factors for delirium demonstrated across multiple studies in various settings, with patients with dementia having a two- to fivefold increased risk for delirium. Up to two thirds of delirious patients have underlying dementia. Nearly any chronic medical condition can predispose a patient to delirium, ranging from diseases involving the central nervous system to diseases outside the central nervous system, including infectious, metabolic, cardiac, pulmonary, endocrine, and neoplastic etiologies. Predictive risk models that identify predisposing factors in populations such as general medicine, intensive care, surgical patients (cardiac and noncardiac), cancer patients, and nursing home residents can help identify patients at increased risk of delirium.
TABLE 58-1 ■ PREDISPOSING AND PRECIPITATING FACTORS FOR DELIRIUM FROM VALIDATED PREDICTIVE MODELS
Predisposing factors
- Dementia or cognitive impairment
- Comorbidity/severity of illness
- Depression
- Vision and/or hearing impairment
- Functional impairment
- History of transient ischemia or stroke
- History of alcohol abuse
- History of hypertension
- Carotid artery disease
- History of delirium
- Age > 70
Precipitating factors
- Drugs (polypharmacy, psychoactive medications, sedative-hypnotics)
- Use of physical restraints
- Indwelling bladder catheter
- Physiologic
  - Elevated BUN/creatinine ratio
  - Elevated serum urea
  - Abnormal serum albumin
  - Abnormal sodium, glucose, or potassium
  - Metabolic acidosis
- Infection
- Iatrogenic complications
- Major surgical procedure (eg, aortic aneurysm repair, noncardiac thoracic surgery, and neurosurgery)
- Trauma admission
- Urgent admission
- Coma
- ICU stay > 10 days
Precipitating Factors
Major precipitating factors identified in validated predictive models include medication use (see section on “Drug Use and Delirium”), which is associated with up to a fivefold increased risk of delirium, use of indwelling bladder catheters, use of physical restraints, dehydration, malnutrition, iatrogenic events, infections, metabolic and electrolyte derangements, surgery, admissions that are urgent or involve trauma, extended ICU stays (> 10 days), and coma (see Table 58-1). Decreased mobility is strongly associated with delirium and concomitant functional decline. The use of medical equipment and devices (eg, indwelling bladder catheters and physical restraints) may further contribute to immobilization. Major iatrogenic events occur in up to 40% of older hospitalized adults (three to five times the risk when compared with adults younger than 65 years) and double the risk for development of delirium. Examples include complications related to diagnostic or therapeutic procedures, allergic reactions, and bleeding caused by over-anticoagulation. Disorders of any major organ system, particularly renal or hepatic failure, can precipitate delirium. Occult respiratory failure has emerged as an increasing problem in older patients, who often lack the typical signs and symptoms of dyspnea and tachypnea. In older adults, acute myocardial infarction and congestive heart failure may present with delirium or “failure to thrive” as the cardinal feature, and minimal symptoms of angina or dyspnea. Occult infection is a particularly noteworthy cause of delirium because older patients may not present with leukocytosis or a typical febrile response. Metabolic and endocrinologic disorders, such as hyper- or hyponatremia, hypercalcemia, acid-base disorders, hypo- and hyperglycemia, and thyroid or adrenal disorders, may also contribute to delirium.
Precipitating factors for delirium in hospitalized older patients that have been validated include use of physical restraints, malnutrition, more than three medications added during the previous day (> 70% of these were psychoactive drugs), indwelling bladder catheter, and any iatrogenic event. The presence of each of these independent factors confers a two- to fourfold increased risk of delirium. The presence of multiple factors has a cumulative effect, yet each risk factor is potentially modifiable.
Drug Use and Delirium
In 30% or more of delirium cases, use of one or more specific medications contributes to its development. While medications often incite delirium, they are also the most common remediable cause of delirium. The most common culprit medications have psychoactive effects, such as sedative hypnotics, anxiolytics, narcotics, and medications with anticholinergic activity (Table 58-2). In previous studies, use of any psychoactive medication was associated with a fourfold increased risk of delirium; use of two or more psychoactive medications was associated with a fivefold increased risk.
Sedative-hypnotic drugs are associated with a 3- to 12-fold increased risk of delirium; narcotics with a threefold risk; and anticholinergic drugs with a 5- to 12-fold risk. The incidence of delirium, similar to other adverse drug events, increases in direct proportion to the number of medications prescribed because of the effects of the medications themselves and the increased risk of drug–drug and drug–disease interactions. Suboptimal medication management, ranging from inappropriate use to overuse of psychoactive medications, occurs commonly in older adults, suggesting that many cases of delirium and related adverse drug events may be preventable.
TABLE 58-2 ■ MEDICATIONS ASSOCIATED WITH INDUCING OR WORSENING OF DELIRIUM (AMERICAN GERIATRICS SOCIETY BEERS CRITERIA, 2019)
Relationship Between Delirium and Dementia
Delirium and dementia frequently coexist, with dementia being a leading risk factor for delirium and delirium resulting in worsened cognitive functioning. The contribution of delirium to permanent cognitive impairment or dementia is an area of active research, given that after delirium some patients never recover to their baseline level of cognitive function. Delirium and dementia may represent two ends of a spectrum of cognitive impairment, with “chronic delirium” and “reversible dementia” falling along a continuum. Dementia is the leading risk factor for delirium, and fully two-thirds of cases of delirium occur in patients with dementia. Studies have shown that delirium and dementia are both associated with decreased cerebral metabolism, cholinergic deficiency, inflammation, and abnormal glucose metabolism, reflecting their overlapping clinical, metabolic, and cellular mechanisms.
Delirium can alter the course of underlying dementia, with dramatic worsening of the trajectory of cognitive decline, resulting in more rapid progression of functional losses and worsened long-term outcomes including hospitalization and mortality. Additionally, postoperative cognitive decline is accelerated among patients with delirium.
PRESENTATION
Cardinal Features
Acute onset and inattention are the central features of delirium. Determining the acuity of onset requires accurate knowledge of the patient’s prior cognitive status and often entails obtaining historical information from another close observer, such as a family member, caregiver, or nurse. With delirium, the mental status changes typically occur over hours to days, in contrast to the changes that occur with dementia, which present insidiously over weeks to months. Another key feature is the fluctuating course of delirium; symptoms tend to wax and wane in severity over a 24-hour period. Lucid intervals are characteristic, and the reversibility of symptoms within a short time can deceive even an experienced clinician. Inattention is manifested as difficulty focusing, maintaining, and shifting attention or concentration. With simple cognitive assessment, patients may display difficulty with straightforward repetition tasks, digit spans, or recitation of the months of the year backward. Delirious patients appear easily distracted, experience difficulty with multistep commands, cannot follow the flow of a conversation, and often perseverate with an answer to a previous question. Additional major features include disorganization of thought and an altered level of consciousness. Disorganized thoughts are a manifestation of underlying cognitive or perceptual disturbances and can be recognized by disjointed and incoherent speech or an unclear or illogical flow of ideas.
Clouding of consciousness is typically manifested by lethargy, with a reduced awareness of the environment that may show diurnal variation. Although not cardinal elements, other frequently associated features include disorientation (more commonly to time and place than to self), cognitive impairments (eg, memory and problem-solving deficits, dysnomia), psychomotor agitation or retardation, perceptual disturbances (eg, hallucinations, misperceptions, illusions), paranoid delusions, emotional lability, and sleep-wake cycle disruption.
Classification of Delirium
The clinical presentation of delirium can take three main forms: hypoactive, hyperactive, or mixed. The hypoactive form of delirium is characterized by lethargy and reduced psychomotor functioning and is the more common form in older patients. Hypoactive delirium often goes unrecognized and carries an overall poorer prognosis. The reduced level of patient activity associated with hypoactive delirium, often attributed to low mood or fatigue, may contribute to its misdiagnosis or underrecognition. By contrast, the hyperactive form of delirium presents with symptoms of agitation, increased vigilance, and often concomitant hallucinations; its presentation rarely remains unnoticed by caregivers or clinicians. Patients can fluctuate between the hypoactive and hyperactive forms—the mixed type of delirium—presenting a challenge in distinguishing the presentation from other psychotic or mood disorders. Recognition of partial or subsyndromal forms of delirium has brought attention to the persistence of symptoms among older patients, particularly during the resolution stages of delirium. Partial forms of delirium also adversely influence long-term clinical outcomes.
Prognosis
Delirium is an important independent determinant of prolonged length of hospital stay, increased mortality, increased rates of nursing home placement, and functional and cognitive decline. Delirium has long been thought to be a reversible, transient condition; however, accumulating evidence brings this into question. Delirium symptoms generally persist for a month or more; as few as 20% of patients attain complete symptom resolution at 6-month follow-up. Cognitive function is impacted for up to a year following delirium, and patients who develop delirium are at increased risk for development of dementia. The chronic detrimental effects are likely related to the duration, severity, and underlying cause(s) of the delirium in addition to the baseline vulnerability of the patient.
EVALUATION
There are numerous instruments for the identification of delirium. Each delirium instrument has strengths and limitations, and the choice among them depends on the goals for use. The most widely used is the CAM, of which the four-item short form has been applied in over 10,000 studies to date and translated into over 19 languages. The CAM has been adapted for use in other settings, including the intensive care unit (CAM-ICU), nursing home (NH-CAM), and emergency department (CAM-ED and B-CAM). The CAM-S, derived from the CAM, can be used to rate delirium severity and has demonstrated predictive validity for relevant clinical outcomes. Several brief screening tools, designed for clinicians with minimal training, have been validated in postsurgical, medical, emergency department, and postacute care settings. For example, the Ultra-Brief CAM (UB-CAM) requires just 1 minute to complete and can identify delirium with high sensitivity and specificity. Selected screening tools are presented in Table 58-3. These can be used as an initial step in delirium detection and should be followed by a comprehensive assessment. Additionally, family-informed tools can be completed by health care professionals and/or families, yielding sensitivities ranging from 67% to 90% and specificities ranging from 56% to 90% in diverse populations.
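The CAM short form described above applies a fixed diagnostic rule to its four cardinal features: acute onset with a fluctuating course (feature 1) and inattention (feature 2) are both required, together with either disorganized thinking (feature 3) or an altered level of consciousness (feature 4). A minimal schematic of that rule follows; the function name and boolean inputs are illustrative only, since actual CAM rating requires a structured bedside cognitive assessment, not a checklist of yes/no flags:

```python
def cam_short_form(acute_onset_fluctuating: bool,
                   inattention: bool,
                   disorganized_thinking: bool,
                   altered_consciousness: bool) -> bool:
    """Schematic CAM short-form rule: features 1 and 2 are both
    required, plus at least one of features 3 and 4."""
    core = acute_onset_fluctuating and inattention
    return core and (disorganized_thinking or altered_consciousness)

# Inattention without acute onset (or vice versa) is never sufficient,
# no matter which other features are present.
assert cam_short_form(True, True, False, True) is True
assert cam_short_form(True, False, True, True) is False
```

As the text emphasizes, a positive screen is a trigger for comprehensive assessment, not a standalone diagnosis.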
TABLE 58-3 ■ SELECTED DELIRIUM SCREENING TESTS
The acute evaluation of suspected or confirmed delirium centers on three main tasks that occur simultaneously: (1) establishing the diagnosis of delirium; (2) determining the potential cause(s) and ruling out life-threatening contributors; and (3) managing the symptoms while assuring patient safety.
Delirium is a clinical diagnosis, relying on astute observation at the bedside, careful cognitive assessment, and history-taking from a knowledgeable informant to establish a change from the patient’s baseline functioning.
Identifying the potentially multifactorial contributors to the delirium is of paramount importance. Many of these factors are treatable, and if left untreated, may result in substantial morbidity and mortality. Because the potential contributors are myriad, the search requires a thorough medical evaluation guided by clinical judgment. The challenge is enhanced by the frequently nonspecific or atypical presentation of the underlying illness in older persons. In fact, delirium is often the only sign of life-threatening illness, such as sepsis, pneumonia, or myocardial infarction in older persons.
History and Physical Examination
A thorough history and physical examination constitute the foundation of the medical evaluation of suspected delirium. The first step in evaluation should be to establish the diagnosis of delirium through careful cognitive assessment and to determine the acuity of change from the patient’s baseline cognitive state. Because cognitive impairment may easily be missed during routine conversation, brief cognitive screening tests, such as the Mini-Cog test or the UB-CAM assessment, should be used to rate the CAM. The degree of attention should be further assessed with simple tests such as a digit span (inattention indicated by an inability to repeat five digits forward or three digits backward) or recitation of the months of the year backward. A targeted history, focusing on baseline cognitive status and chronology of recent mental status changes, should be elicited from a reliable informant. Historical data including intercurrent illnesses, recent adjustments in medications, the possibility of withdrawal from alcohol, other substances, or medications, and pertinent environmental changes may elucidate precipitating factors of delirium.
The physical examination should comprise detailed review focusing on potential etiologic clues to an underlying or inciting disease process. Vital sign assessment is important to identify fever, tachycardia, or decreased oxygen saturation, each of which may point to specific disease processes. Auscultatory examination may suggest pneumonia or pulmonary effusion. A new cardiac murmur or dysrhythmia may suggest ischemia or congestive heart failure. Gastrointestinal examination should focus on evidence of an acute abdominal process, such as occult bleeding, perforated viscus, or infection. Patients with delirium may also demonstrate nonspecific focal findings on neurologic examination, such as asterixis or tremor. New focal neurologic deficits should raise suspicion of an acute cerebrovascular event or subdural hematoma. In many older patients and especially those with cognitive impairment, delirium may be the initial manifestation of a serious new disease process. Attention to early localizing signs on serial physical examinations is paramount.
A complete medication review, including over-the-counter medications, is critical. Any medications with known psychoactive effects should be discontinued or minimized whenever possible. Medications with potential for withdrawal should be tapered carefully. Because of pharmacodynamic and pharmacokinetic changes in aging adults, these medications may cause deleterious psychoactive effects even when prescribed at customary doses and with serum drug levels that are within the “therapeutic range.”
Laboratory Tests and Imaging
Laboratory evaluation should be guided by clinical judgment and take into account specific patient characteristics and historical data. A thorough history and physical examination, medication review, focused laboratory testing (eg, complete blood count, chemistries, glucose, renal and liver function tests, urinalysis), and search for occult infection should help to identify the majority of potential contributors to the delirium. Additional laboratory testing, such as thyroid function tests, B12 level, cortisol level, drug levels or toxicology screen, syphilis serologies, and ammonia level, should be based on the specific clinical presentation. Further diagnostic work-up with an electrocardiogram, chest radiograph, and/or arterial blood gas test may be appropriate for patients with pulmonary or cardiac conditions. The indications for cerebrospinal fluid examination, brain imaging, or EEG remain controversial. Their overall diagnostic yield is low, and these procedures are probably indicated in fewer than 5% to 10% of delirium cases. Lumbar puncture with cerebrospinal fluid examination is indicated for the febrile delirious patient when meningitis or encephalitis is suspected. Brain imaging (such as CT or MRI) should be reserved for cases with new focal neurologic signs, with history or signs of head trauma, or without another identifiable cause of the delirium. Of note, some neurologic symptoms are associated with delirium, including tremor and asterixis. EEG, which has a false-negative rate of 17% and a false-positive rate of 22% for distinguishing between delirious and nondelirious patients, plays a limited role and is most commonly employed to detect subclinical seizure disorders and to differentiate delirium from nonorganic psychiatric conditions.
Differential Diagnosis
Distinguishing a long-standing confusional state (dementia) from delirium alone, or from delirium superimposed on dementia, is an important, but often difficult, diagnostic step. These two conditions can be differentiated by the acute onset of symptoms in delirium, with dementia presenting much more insidiously and by the impaired attention and altered level of consciousness associated with delirium.
The differential diagnosis of delirium can be extensive and includes other psychiatric conditions such as depression and nonorganic psychiatric disorders (Table 58-4). Although perceptual disturbances, such as illusions and hallucinations, can occur with delirium in about 15% of cases, recognition of the key features of acute onset, inattention, altered level of consciousness, and global cognitive impairment will enhance the identification of delirium. Differentiating among diagnoses is critical because delirium carries a more serious prognosis without proper evaluation and management. Treatment for certain conditions such as depression or affective disorders may involve use of drugs with anticholinergic activity, which could exacerbate an unrecognized case of delirium. At times, working through the differential diagnosis can be challenging, and the diagnosis of delirium may remain uncertain. Because of the potentially life-threatening nature of delirium, however, it is prudent to manage the patient as having delirium and search for underlying precipitants until further information can be obtained.
TABLE 58-4 ■ DIFFERENTIAL DIAGNOSIS OF ALTERED MENTAL STATUS
Algorithm for the Evaluation of Altered Mental Status
Figure 58-2 presents an algorithm for the evaluation of altered mental status in the older patient. The initial steps center on establishing the patient’s baseline cognitive functioning and the onset and timing of any cognitive changes. Chronic impairments, representing changes that occur over months to years, are most likely attributable to dementia, which should be evaluated accordingly (see Chapter 59). Acute alterations, representing abrupt deteriorations in mental status, occur over hours to weeks and may be superimposed on underlying dementia. They should be further evaluated with cognitive testing to establish the presence of delirium. In the absence of notable delirium features (see “Presentation” earlier in this chapter), subsequent evaluation should focus on the possibility of major depression, acute psychotic disorder, or other psychiatric disorders (see Chapters 65, 60, and 66).
FIGURE 58-2. Flowchart for evaluation of suspected delirium in an older person. ABG, arterial blood gas; B12, cyanocobalamin or vitamin B12 level; EEG, electroencephalography; IM, intramuscular; LP, lumbar puncture; PO, by mouth; TFTs, thyroid function tests (eg, T4, thyroid index, thyroid-stimulating hormone); UB-CAM, Ultra Brief Confusion Assessment Method.
PREVENTION
Primary prevention—preventing delirium before it develops—is the most effective strategy for reducing delirium and its associated adverse outcomes. Table 58-5 describes well-documented delirium risk factors and preventive interventions to address each risk factor. A controlled clinical trial demonstrated the effectiveness of a delirium prevention strategy targeted toward these risk factors, which were selected based on their clinical relevance and the degree to which they could be modified by employing practical and feasible interventions. Compared with standard care, implementation of these preventive interventions resulted in a 40% risk reduction for delirium in hospitalized older patients.
TABLE 58-5 ■ DELIRIUM RISK FACTORS AND TESTED PREVENTIVE INTERVENTIONS
The Hospital Elder Life Program (HELP; now AGS CoCare HELP at https://help.agscocare.org/) represents an innovative strategy of hospital care for older patients, designed to incorporate the tested delirium prevention strategies and to improve overall quality of hospital care. Programs such as HELP underscore the importance of an interdisciplinary team’s contributions to the prevention of delirium. For example, trained volunteers and family members can play roles in daily orientation, therapeutic recreation activities, and feeding assistance. Physical rehabilitation experts and nurses can assist with mobilization and the incorporation of daily exercises to prevent functional decline. Dietitians can help to maximize appropriate caloric intake and oral hydration in acutely ill patients. Consultant pharmacists, chaplains, and social workers also may provide specialized expertise to address issues pertinent to individuals at risk for delirium.
At least 14 studies have examined primary prevention with nonpharmacologic multicomponent approaches to delirium in controlled trials with prospective sampling frameworks and validated delirium assessments. These studies applied multifactorial interventions or educational strategies targeted toward health care professionals, staff, and families, and demonstrated significant reductions in delirium rates, in-hospital falls, health care–associated costs, and/or duration of delirium.
Proactive geriatric consultation has been demonstrated to reduce the risk of delirium post hip fracture by 40% in a randomized controlled trial. Another trial found that home rehabilitation after acute hospitalization of older adults was associated with lower risk of delirium and greater patient satisfaction, when compared with an institutional setting. In all, trials suggest that up to 50% of cases of delirium may be preventable and that prevention strategies should begin early during hospitalization.
Preventive efforts for delirium will require system-wide changes and large-scale shifts in local and national policies and approaches to care.
Recommended changes include routine cognitive and functional assessments on admission of all older patients, beginning in the emergency department setting; monitoring mental status as a “vital sign”; education of physicians and nurses to improve recognition and heighten awareness of the clinical implications; enhanced geriatric physician and nursing expertise; incentives to change practice patterns that lead to delirium (eg, immobilization, use of deliriogenic medications, bladder catheters, and physical restraints); and creation of systems that enhance high-quality geriatric care (eg, geriatric expertise, medication review, family involvement, case management, clinical pathways, and quality monitoring for delirium).
MANAGEMENT
Overview
The recommended management approach for all delirious patients begins with nonpharmacologic strategies, which usually result in successful symptom amelioration. In selected cases, such strategies must be supplemented with a pharmacologic approach, reserved for patients in whom delirium symptoms would result in interruption of needed medical therapies (eg, mechanical ventilation, central lines) or may endanger the safety of the patient or other persons. However, prescribing any drug requires balancing the benefits of delirium management against the potential for adverse medication effects because sedative drugs may prolong delirium and worsen clinical outcomes. The clinical team, family, and caregivers should understand that the choice of almost any medication may further cloud the patient’s mental status, prolong delirium symptoms, and obscure efforts to monitor the course of the mental status change. Any drug should be initiated at the lowest starting dose for the shortest time possible.
Nonpharmacologic Management
Nonpharmacologic approaches are the mainstays of prevention and treatment for every delirious patient. These include strategies for reorientation and behavioral intervention, such as ensuring the presence of family members, use of sitters, and transferring a disruptive patient to a private room or closer to the nurse’s station for increased supervision. Orienting influences such as calendars, clocks, and the day’s schedule should be prominently displayed, along with familiar personal objects from the patient’s home environment (eg, photographs and religious artifacts). Personal contact and communication are critical to reinforce patient awareness and encourage patient participation as much as possible. Communication should incorporate repeated reorientation strategies, clear instructions, and frequent eye contact. Correction of sensory impairments (ie, vision and hearing) should be maximized as applicable for individual patients by encouraging the use of eyeglasses and hearing aids during the hospital stay. Mobility and independence should be promoted; physical restraints should be avoided because they lead to decreased mobility, increased agitation, and greater risk of injury and worsening delirium. Patient involvement in self-care and decision making should also be encouraged. Other environmental interventions include limiting room and staff changes and providing a quiet patient care setting with low-level lighting at night. An environment with
decreased noise allowing for an uninterrupted period for sleep at night is of crucial importance in the management of delirium. This may require unit-wide changes in the coordination of nursing and medical procedures, including medication dispensing, vital sign recording, and administration of intravenous medications and other treatments. Hospital-wide changes may be needed to ensure a low level of noise at night, including minimizing hallway noise, overhead paging, and staff conversations. Family involvement in nonpharmacologic management of delirium is critical and has been shown to reduce length of stay and ameliorate anxiety in family members.
Nonpharmacologic Sleep Protocol
Nonpharmacologic approaches for relaxation and sleep can be effective for management of agitation in delirious patients and for prevention of delirium through minimization of psychoactive medications. The nonpharmacologic sleep protocol includes three components: (1) a glass of warm milk or herbal tea, (2) relaxation music or tapes, and (3) back massage. This protocol was demonstrated to be feasible and effective, reducing use of sleeping medications from 54% to 31% in a hospital environment.
Antipsychotics
As a last resort, antipsychotics are the preferred agents for pharmacologic treatment of delirium. Haloperidol is the agent with the longest track record, although its use may be complicated by extrapyramidal side effects and acute dystonias. Many trials examining the efficacy of haloperidol and the atypical antipsychotics (such as quetiapine, risperidone, and olanzapine) have been low quality and/or inconclusive, with no effect on delirium duration, severity, relief of symptoms, length of stay, or mortality. A recent placebo-controlled, randomized trial of haloperidol and ziprasidone in the intensive care setting similarly failed to show a benefit for these medications.
Comparisons across antipsychotics have not found superior efficacy of any one agent. Additionally, there is evidence that antipsychotic drugs may prolong delirium and result in poor clinical outcomes. Moreover, official warnings have been issued regarding the increased mortality associated with the use of haloperidol and atypical antipsychotics in patients with dementia. Use of antipsychotics should be avoided in patients with Parkinson disease and Lewy body dementia.
If proceeding with antipsychotic administration, the intravenous route should be reserved for monitored settings due to the risk of torsades de pointes and sudden death. Parenteral administration is indicated when a rapid onset of action is needed, although its duration of action is short, whereas oral or intramuscular use is associated with a longer duration of action. The recommended starting dose is 0.25 mg of haloperidol orally or parenterally. The dose may be repeated every 30 minutes after vital signs have been rechecked. The clinical end point should be an awake but manageable patient, a goal that can be achieved by following the geriatric prescribing principle, “start low and go slow.” Most older patients naïve to prior treatment with an antipsychotic should require a total loading dose of no more than 2.5 mg of haloperidol. A subsequent maintenance dose consisting of one-half of the loading dose should be administered in divided doses over the next 24 hours, with doses tapered over the ensuing 48 hours as the agitation resolves. Alternatively, an atypical antipsychotic may be considered at a low starting dose: quetiapine (starting dose, 12.5 mg; 24-h maximum, 25 mg), olanzapine (starting dose, 2.5 mg; 24-h maximum, 10 mg), or risperidone (starting dose, 0.25–0.5 mg; 24-h maximum, 1.5 mg). Patients should be reevaluated continually to assess for ongoing need, and antipsychotics should be tapered off as soon as possible.
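The maintenance-dose arithmetic in this regimen (a 24-hour maintenance total equal to one-half of the total loading dose, given in divided doses) can be sketched as follows. This is an illustrative calculation of the numbers stated above, not a clinical dosing tool; the two-dose split is an arbitrary assumption for the example.

```python
def haloperidol_maintenance(loading_dose_mg, n_divided=2):
    """Illustrative arithmetic only: the regimen described in the text
    gives a 24-hour maintenance total of one-half the loading dose,
    administered in divided doses. Not a clinical dosing tool."""
    if loading_dose_mg > 2.5:
        # The text caps the loading dose at 2.5 mg for antipsychotic-naive older patients
        raise ValueError("exceeds the 2.5-mg maximum loading dose described in the text")
    total_24h = loading_dose_mg / 2
    return total_24h, [total_24h / n_divided] * n_divided

# A patient who required the full 2.5-mg loading dose:
total, doses = haloperidol_maintenance(2.5)
print(total)   # 1.25 (mg over the next 24 hours)
print(doses)   # [0.625, 0.625]
```

Any real dosing decision rests with the prescriber; the function merely makes the "one-half of the loading dose, divided" rule explicit.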
Other Pharmacologic Approaches
Benzodiazepines (eg, lorazepam) are not recommended as first-line agents in the treatment of delirium because of their propensity to cause oversedation and to exacerbate acute mental status changes. However, they remain the treatment of choice for delirium caused by seizures and alcohol- and medication-related withdrawal syndromes. While other drugs have been advocated for use in treatment of delirium, evaluation of their use has resulted in discrepant findings, and there is no consensus recommendation for their general use. Trials of the sedative dexmedetomidine in ventilated ICU patients found a reduction in delirium duration and length of ICU stay as well as better effectiveness and safety in haloperidol-resistant patients. Clonidine, an α2-agonist, has been shown to be safe, though no effect on delirium was detected. In randomized trials of melatonin and the melatonin receptor agonist ramelteon, the results have been mixed to date. Overall, the data do not support routine pharmacologic management of delirium, although the consensus in the field is for a limited role of medications for the treatment of intractable distress and agitation when nonpharmacologic strategies have failed.
SPECIAL ISSUES
COVID-19
The arrival of the SARS-CoV-2 virus and associated COVID-19 in early 2020 has culminated in a global health crisis. Although COVID-19 typically manifests as an influenza-like respiratory illness, early reports of neurologic symptoms included altered mental status. In one study of older adults presenting to the emergency department with COVID-19, delirium was the sixth most common presenting symptom, and in some cases occurred without the typical symptoms of COVID-19. Rates of delirium during hospitalization with COVID-19 range from 25% to 84%. Delirium may be more severe or prolonged due to social isolation and use of personal protective equipment, resulting in poor communication, reduced social interactions, limited reorienting of patients, and prolonged need for mechanical ventilation, with increased immobilization, depth of sedation, or use of second-line medications due to drug shortages. Nonpharmacologic interventions for delirium prevention have been adapted for COVID-19 and are available online at https://help.agscocare.org/chapter-abstract/chapter/H00107/H00107_PART001_002.
Patient Preference and Decision Making
Given acute fluctuations in attention and decision-making capacity, delirium presents formidable challenges to the ethical care of affected patients (see Chapters 7 and 26). Cognitive assessments in patients with suspected delirium help to ensure that patients can be involved in decision making whenever possible and that appropriate surrogate decision makers are involved in representing a patient’s wishes and understanding the risks and benefits of procedures and treatments. Because the patient may exhibit periods of lucidity in delirium, there may be times during which the decision-making and informed consent process can and must involve the patient. The clinician should be cognizant of ongoing subclinical manifestations of delirium, which may be important for both the long-term management and decision-making capacity of the patient.
Nursing Home Setting
For the postacute population receiving short-term rehabilitative care, persistent delirium after an acute hospitalization is a major concern. Prior studies demonstrated that 16% of admissions to postacute care met full CAM criteria for delirium, while another 50% demonstrated signs of subsyndromal delirium. Patients with delirium on admission to postacute care experience more complications such as falls, higher rehospitalization rates, and higher mortality. Of those admitted to postacute care with delirium, over 50% are still delirious 1 month later. Persistence of delirium prevents functional recovery in the postacute setting; only those patients whose delirium cleared within 2 weeks of admission recovered to their prehospitalization functional status. Persistent delirium is also associated with higher mortality.
The long-term care population represents a high-risk group for delirium, with a high prevalence of dementia and functional impairments. Incident delirium is common in this population, frequently heralding the onset of an acute illness that results in hospitalization and/or death. Nursing staffing ratios, high turnover, competing concerns, and the high prevalence of dementia make identification and prevention of delirium challenging in this setting. Nonetheless, these patients represent among the most vulnerable of older adults, and further attention to delirium in this setting is warranted.
Research in long-term care settings is challenging, and results are mixed in this area. A recent trial involving nonpharmacologic delirium prevention strategies in the nursing home setting did not prevent delirium or reduce delirium symptoms, with greater than expected improvement in both intervention and usual care groups. This finding underscores the need for further research into effective delirium prevention strategies in this setting.
Palliative and End-of-Life Care
Because delirium occurs in more than 80% of patients at the end of life, it is considered nearly inevitable in the terminal stages by most hospice care providers and may serve as a marker of approaching death. Establishing goals of care with the patient and family is a crucial step, including discussions about the potential causes of the delirium, intensity of medical evaluations considered appropriate, and the potential trade-off between alertness and adequate control of pain and agitation. Some patients may wish to preserve their ability to communicate as long as possible, while others may focus on comfort perhaps at the expense of alertness. Physicians must be cognizant that even in the terminal phase, many causes of delirium are
potentially reversible, and may be amenable to interventions (eg, medication adjustments, treatment of dehydration, hypoglycemia, or hypoxia) that may improve comfort and quality of life. However, the burdens of evaluation or treatment (eg, reduction in narcotic dose) may not be consistent with the goals for care. In all cases, symptom management should begin immediately, while evaluation is underway. Nonpharmacologic approaches should be instituted in all patients, with pharmacologic approaches for selected cases. Haloperidol remains the first-line therapy for delirium in terminally ill patients, although a recent randomized controlled trial did not support its use. In end-of-life care, sedation may be indicated as an additional therapy for management of severe agitated delirium in the terminally ill patient, which can cause considerable distress for the patient and family. Because sedation poses the risks of decreased meaningful interaction with family, increased confusion, and respiratory depression, this choice should be made in conjunction with the family according to the goals of care. If sedation is indicated, an agent that is short acting and easily titrated to effect is recommended. Lorazepam (starting dose 0.5–1.0 mg PO, IV, or SQ) is the recommended agent of choice.
FURTHER READING
American Geriatrics Society 2019 Beers Criteria Update Expert Panel. American Geriatrics Society 2019 updated Beers criteria for potentially inappropriate medication use in older adults. J Am Geriatr Soc. 2019;67:674–694.
American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Washington, DC: American Psychiatric Association; 2013.
Fong TG, Davis D, Growdon ME, et al. The interface of delirium and dementia in older persons. Lancet Neurol. 2015;14:823–832.
Fong TG, Jones RN, Marcantonio ER, et al. Adverse outcomes after hospitalization and delirium in persons with Alzheimer disease. Ann Intern Med. 2012;156:848–856.
Girard TD, Exline MC, Carson SS, et al. Haloperidol and ziprasidone for treatment of delirium in critical illness. N Engl J Med. 2018;379:2506–2516.
Hshieh TT, Yue J, Oh E, et al. Effectiveness of multicomponent nonpharmacologic delirium interventions: a systematic review and meta-analysis. JAMA Intern Med. 2015;175:512–520.
Inouye SK, Bogardus ST Jr, Charpentier PA, et al. A clinical trial of a multicomponent intervention to prevent delirium in hospitalized older patients. N Engl J Med. 1999;340:669–676.
Inouye SK, Charpentier PA. Precipitating factors for delirium in hospitalized elderly persons: predictive model and inter-relationship with baseline vulnerability. JAMA. 1996;275:852–857.
Inouye SK, Marcantonio ER, Kosar CM, et al. The short- and long-term relationship between delirium and cognitive trajectory in older surgical patients. Alzheimers Dement J Alzheimers Assoc. 2016;12:766–775.
Inouye SK, van Dyck CH, Alessi CA, et al. Clarifying confusion: the confusion assessment method. A new method for detection of delirium. Ann Intern Med. 1990;113:941–948.
Inouye SK, Westendorp RGJ, Saczynski J. Delirium in elderly people. Lancet. 2014;383:911–922.
Leslie DL, Marcantonio ER, Zhang Y, et al. One-year health care costs associated with delirium in the elderly population. Arch Intern Med. 2008;168:27–32.
Marcantonio ER. Delirium in hospitalized older adults. N Engl J Med. 2017;377:1456–1466.
Marcantonio ER, Flacker JM, Wright RJ, et al. Reducing delirium after hip fracture: a randomized trial. J Am Geriatr Soc. 2001;49:516–522.
Marcantonio ER, Goldman L, Mangione CM, et al. A clinical prediction rule for delirium after elective non-cardiac surgery. JAMA. 1994;271:134–139.
Motyl CM, Ngo L, Zhou W, et al. Comparative accuracy and efficiency of four delirium screening protocols. J Am Geriatr Soc. 2020;68:2572–2578.
Oh ES, Fong TG, Hshieh TT, Inouye SK. Delirium in older persons: advances in diagnosis and treatment (systematic review). JAMA. 2017;318:1161–1174.
Oh ES, Needham DM, Nikooie R, et al. Antipsychotics for preventing delirium in hospitalized adults: a systematic review. Ann Intern Med. 2019;171:474–484.
O’Mahony R, Murthy L, Akunne A, Young J. Synopsis of the National Institute for Health and Clinical Excellence guideline for prevention of delirium. Ann Intern Med. 2011;154:746–751.
Saczynski JS, Marcantonio ER, Quach L, et al. Cognitive trajectories after post-operative delirium. N Engl J Med. 2012;367:30–39.
Wilson JE, Mart MF, Cunningham C, et al. Delirium. Nat Rev Dis Primer. 2020;6:1–26.
Chapter 59
Dementia Including Alzheimer Disease
Cynthia M. Carlsson, Nathaniel A. Chin, Carey E. Gleason, Luigi Puglielli, Sanjay Asthana
Alzheimer disease (AD) is the most common neurodegenerative disorder affecting older adults, projected to affect more than 13 million Americans and 115 million individuals worldwide by 2050. Compared to projections in high-income countries, the number of individuals with AD in low- and middle-income nations is increasing at an even greater rate. The disease is characterized by diffuse functional and structural abnormalities in the brain that lead to progressive cognitive and behavioral deficits and functional decline. AD is associated with significant morbidity and mortality and is currently the sixth most common cause of death in the United States. The physical, psychological, functional, and socioeconomic impact of AD substantially affects the well-being and quality of life of patients and their caregivers. Caring for patients with AD places heavy financial burden on patients, families, communities, and the health care system at large. In the United States in 2020, the average lifetime cost of caring for a person with AD exceeded $350,000. The total cost of caring for Americans with AD exceeds $355 billion annually. Evidence is beginning to emerge on the economic impact of dementia care in low- and middle-income countries as most of the costs in these nations are related to informal care.
Recognizing the enormity of the burden of AD, international collaborations between clinicians, researchers, policy makers, patient advocacy groups, the media, and many others have increased public awareness of the global impact of the disease and have laid the foundation for the development of effective preventive and therapeutic strategies as well as improvements in care management for patients with AD. An example of
such a coordinated effort is the 2011 United States National Alzheimer’s Project Act (NAPA), a law designed to create and maintain an integrated national plan to address AD. The plan encompasses federal coordination of AD research and services and aims to improve early diagnosis and coordination of care, accelerate development of effective treatments, promote health equity in AD care among ethnic and racial minority populations, and stimulate coordination with international groups to address AD globally. Such national and international collaborations will help accelerate optimal diagnosis and care of patients at risk for AD and related dementias. Another example is the public health prevention effort underway to address 12 recognized modifiable risk factors that contribute to dementia and recommend policy interventions to mitigate these risks. In the United States, this is the focus of the Alzheimer’s Association and Centers for Disease Control and Prevention’s Building Our Largest Dementia (BOLD) Public Health Center of Excellence on Dementia Risk Reduction, supported by the BOLD Infrastructure for Alzheimer’s Act (PL 115-406).
Learning Objectives
Describe the current diagnostic criteria for dementia, Alzheimer disease (AD), and mild cognitive impairment (MCI), and how these conditions differ from normal cognitive aging.
Understand the effects of age and other genetic and nongenetic risk factors on risk of developing AD.
Identify key neuropathologic features and mechanistic pathways associated with AD.
Recognize common reversible causes of cognitive dysfunction.
Describe an effective dementia care management plan across care settings and stages of disease, integrating use of pharmacologic and nonpharmacologic interventions, education, and community resources.
Key Clinical Points
AD is the most common neurodegenerative disorder affecting older adults, with prevalence rates increasing with advancing age.
While aging is the most established risk factor for late-onset AD, various other genetic, lifestyle, and environmental factors also influence dementia risk.
The diagnostic evaluation for dementia, AD, and MCI depends heavily on a careful assessment of an individual’s change in functional status, a structured cognitive assessment, a thorough clinical examination, and exclusion of other competing causes of cognitive decline.
There are currently no proven preventive or disease-modifying therapies for AD; however, aducanumab is the first FDA-approved medication that reduces amyloid burden in the brain, but without significant improvement in cognition. The current standard-of-care management plans integrate use of pharmacologic therapies to delay symptom progression; nonpharmacologic strategies to optimize function, behavior, and safety; and education and support for patients and their care partners.
Advanced care planning prior to loss of decisional capacity is of critical importance in developing patient-centered goals of care in persons with cognitive impairment.
DEFINITION
In defining AD features, it is widely recognized that the clinical cognitive and behavioral signs and symptoms do not always correlate with the degree of AD neuropathologic changes noted in the brain. The discrepancy between the neuropathologic changes and the individual clinical expression of disease is likely related to additional unidentified physiologic, metabolic, or genetic factors that either accelerate or slow cognitive decline. For example, some older adults with normal cognitive function just prior to death have been found to have significant AD neuropathology on autopsy. These individuals may have unrecognized neuroprotective factors that help preserve cognitive function despite notable neuropathologic changes. Thus, in order to disentangle the clinical syndrome from the neuropathologic changes, the current AD core clinical criteria are distinct from the AD neuropathologic guidelines, yet encourage clinicians and researchers to postulate the most likely neuropathology underlying the clinical presentation of disease.
In 2011, the National Institute on Aging and the Alzheimer’s Association (NIA-AA) released cosponsored revised clinical diagnostic guidelines for dementia, dementia due to AD, MCI, and a theoretical framework for defining the preclinical stages of AD. Core clinical diagnostic criteria for dementia, AD, and MCI were designed for use in all clinical settings and are summarized in Tables 59-1 and 59-2. In 2013, the American Psychiatric Association published the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5). Within this edition, the term “dementia” was replaced with “major neurocognitive disorder” and the term “mild cognitive impairment” with “mild neurocognitive disorder.” While the DSM-5 and NIA-AA terminologies differ, the diagnostic criteria for major neurocognitive disorder and dementia as well as those for mild neurocognitive disorder and MCI are nearly identical (see Tables 59-1 and 59-2) and, thus, in most circumstances are interchangeable. For simplicity, this chapter uses the terms “dementia” and “MCI.”
TABLE 59-1 ■ NIA-AA CORE CLINICAL DIAGNOSTIC CRITERIA FOR ALL-CAUSE DEMENTIA AND DEMENTIA DUE TO ALZHEIMER DISEASE
EPIDEMIOLOGY
AD is the most common cause of dementia in older adults, currently affecting more than 6 million Americans. Worldwide more than 44 million individuals currently have AD or a related dementia. Unless effective preventive strategies are identified, it is anticipated that the prevalence of AD will double every 20 years. The United Nations predicts that the major rate of increase in the prevalence of AD will likely occur in developing countries that may not possess the essential resources, public health support system, or medical expertise to care for patients with AD. There is clear evidence that a
number of risk factors significantly enhance the overall risk for developing AD. These risk factors relate to both genetic and nongenetic markers and are discussed below.
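The 20-year doubling assumption above implies simple exponential growth in prevalence. The sketch below projects worldwide prevalence under that assumption; the 44-million baseline is from the text, while treating it as a present-day (year-zero) figure is an assumption of the example.

```python
def projected_prevalence(base_millions, years_elapsed, doubling_years=20):
    """Project dementia prevalence assuming a constant doubling time,
    per the text's assumption of doubling every 20 years. Illustrative only."""
    return base_millions * 2 ** (years_elapsed / doubling_years)

# From roughly 44 million today, doubling every 20 years:
print(round(projected_prevalence(44, 20)))  # 88 (million after one doubling time)
print(round(projected_prevalence(44, 30)))  # 124 (million after ~30 years)
```

A roughly 30-year horizon under this assumption lands in the same range as the 115-million-by-2050 projection cited earlier in the chapter.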
Aging
Age is the single most important and validated risk factor for AD. Epidemiologic studies indicate that the incidence and prevalence of AD both increase with age. Based on data from the 2010 US Census, the prevalence of AD was approximately 3% among adults between the ages of 65 and 74, 17% in persons aged 75 to 84, and 32% in individuals age 85 and older. With the average human lifespan increasing, the prevalence of AD is expected to accelerate at an even greater rate in coming decades. Although not clearly understood, converging research findings provide clues concerning the potential molecular pathway(s) underlying the association between aging and AD. Increases in the pathologic hallmarks of AD, notably amyloid plaques and neurofibrillary tangles, have been noted in the brains of older adults.
Age-related changes in molecular pathways involving insulin-like growth factor 1 receptor (IGF1-R), neurotrophin signaling, β-site amyloid precursor protein cleaving enzyme 1 (BACE1), and amyloid precursor protein (APP) metabolism may account for some of the increase in incidence and prevalence of AD with aging. Additionally, aging and IGF1-R signaling are both associated with cerebrovascular dysfunction, which may play a key role in the development of AD. An increased exposure time to age-dependent vascular risk factors or an interaction between aging and vascular risk factors may in part account for the effects of aging on the pathobiology of AD.
Apolipoprotein E Genotype
Late-onset AD is the most common form of the disorder, accounting for greater than 95% of all AD cases. Although some cases of younger-onset AD have strong links to the genes coding for APP and presenilin 1 (PSEN1) and 2 (PSEN2) proteins, many cases of late-onset AD are seen in individuals without any clear genetic predisposition. A common polymorphism in the apolipoprotein E (APOE) gene is the major determinant of risk in families with late-onset AD. Of the three allelic forms (ε2, ε3, and ε4), AD risk is increased fourfold in individuals with at least one ε4 allele and 12-fold in persons with two copies of the ε4 allele. While ε4 genotype modifies an
individual’s risk of the disease, by itself it is neither necessary nor sufficient for the development of AD. In the Framingham study, 55% of ε4 homozygote carriers, 27% of ε4 heterozygote carriers, and 9% of noncarriers developed AD by age 85. APOE ε4 genotype may contribute to AD by influencing processes related to the development of AD, including altering the rate of production, clearance, or aggregation of amyloid β-peptide and/or influencing cerebral cholesterol metabolism and inflammation.
Vascular Risk Factors
Midlife vascular risk factors, including hypercholesterolemia, hypertension, diabetes mellitus, metabolic syndrome, obesity, and physical inactivity, have all been associated with a greater risk of developing AD in later life. High midlife total cholesterol and blood pressure levels are associated with a two- to nearly threefold increased risk of developing AD decades later and may convey an even greater risk than that caused by APOE ε4 allele.
Abnormal cholesterol metabolism is related to APOE ε4 allele, suggesting that some of the adverse effects of this genotype on AD risk may be partially mediated through lipoprotein dysregulation. In a community-based cohort study, higher glucose levels were associated with an increased risk of dementia in populations both with and without diabetes mellitus. Metabolic syndrome is also associated with increased risk for AD, although this cluster of risk factors is more consistently related to greater risk of vascular dementia. Midlife obesity (RR 1.60, 95% CI 1.34–1.92) and physical inactivity (RR 1.82, 95% CI 1.19–2.78) are interrelated and both independently increase the risk for developing AD in late life. With more than 35% of current US adults meeting criteria for obesity, there is concern that this risk factor could further accelerate projected increases in AD incidence rates over the coming decades.
Studies support that vascular factors exert an independent additive effect on AD risk. The presence of multiple cardiovascular risk factors at midlife substantially increases the risk of late-life dementia in a dose-dependent manner. The positive corollary to these findings is that about a third of AD cases worldwide might be attributable to potentially modifiable risk factors, thus, providing a target for preventive strategies. Vascular risk factors exert their adverse effects on AD pathology through a variety of mechanisms, including modulation of β-amyloid (Aβ) metabolism, effects on insulin receptors, blood-brain barrier (BBB) integrity, endothelial dysfunction, and
cerebral blood flow. These vascular-mediated changes subsequently lead to tissue hypoxia, increased oxidative stress, inflammation, and cognitive decline.
Traumatic Brain Injury
There is increasing epidemiologic evidence that moderate or severe traumatic brain injury (TBI) is a risk factor for AD in late life and may precipitate earlier onset of the disease. In longitudinal studies, the magnitude of AD risk increases with TBI severity. Compared to controls, World War II veterans with moderate TBI were twice as likely to develop AD, whereas the risk was fourfold in veterans with severe TBI with loss of consciousness. Neuropathologic examination of brains from patients with a history of head trauma generally reveals changes of diffuse amyloid plaques together with tau pathology, inflammatory response, and loss of cholinergic neurons. These pathologic changes may be related to transient upregulation of BACE1 together with increased generation of Aβ. These features are accompanied by tau hyperphosphorylation and increased caspase-mediated cleavage of APP. Thus, head trauma may lead to AD by triggering accelerated neurodegeneration.
Newer evidence demonstrates that recurrent mild TBI, including both concussive and subconcussive injuries, may also contribute to future risk of cognitive decline. However, it has been difficult to establish risk estimates of the impact of repetitive mild TBI on risk for AD due to a variety of methodologic challenges. The high frequency of concussive and subconcussive injuries, the variability in definitions and measurements of mild TBI, the heterogeneity of injuries among various cohorts (ie, military combat veterans vs contact sport athletes), and selection and recall biases have complicated research in this area. Repetitive concussive injuries may also lead to chronic traumatic encephalopathy (CTE), a condition that is neuropathologically distinct from AD. Symptoms of CTE frequently include headaches, disturbances in attention or concentration, and depression; Chapter 64 provides additional details on CTE. Research is underway to clarify the varying types and severity of TBI and the effects of such injuries on risk for posttraumatic neurodegeneration.
Depression
More than 30% of patients with AD develop depression during the course of their illness, and some may present with depressive symptoms as their first clinical manifestation of underlying AD. While depression has long been recognized as a common psychiatric condition in older adults that may mimic dementia, depression is likely also a risk factor for AD. Findings of a meta-analysis involving over 20 population-based prospective studies supported an increased risk of AD in patients with a history of late-life depression (pooled risk OR [95% CI] 1.65 [1.42–1.92]). To date, the precise mechanisms underlying the association between depression and enhanced AD risk are unknown. Several potential mechanisms have been proposed that are common to both AD and depression, including elevated levels of cytokines, increased vascular risk factors, and the potential role of the APOE ε4 allele. More research is needed to better understand the biological basis of increased risk of AD in patients with a history of depression.
Race and Ethnicity
The assessment of differences in AD prevalence rates across geographic regions worldwide and among various racial and ethnic groups has proven to be challenging. Differences in education, literacy, life expectancy, access to health care, nutrition, social stressors, vascular risk factors, and cultural beliefs in what is considered normal aging can all influence AD prevalence estimates. In the Indianapolis/Ibadan studies, the incidence and prevalence of AD were significantly lower among Africans in Ibadan, Nigeria, than among age-matched African Americans in Indianapolis, suggesting that differences in environmental factors may play a larger role than race in influencing the development of AD. The significant influence of environmental factors on AD risk is also supported by data showing that migrant populations tend to have dementia rates that fall between those seen in their homeland and adopted countries. Standardized approaches to case ascertainment of dementia and statistical comparisons across nations have been implemented to better assess variations in prevalence rates among low-, middle-, and
high-income countries. These approaches have produced age-adjusted dementia prevalence estimates of approximately 5% to 9% in people older than age 60 across global regions.
Studies assessing ethnic and racial variations in dementia rates within countries have identified some group differences in AD incidence and prevalence. In a population-based study in the Washington Heights and
Inwood communities of New York City, the cumulative incidence of AD was increased twofold among individuals of African-American and Caribbean Hispanic origin. The group differences in AD incidence did not change following corrections for differences in years of education or history of vascular risk factors. In a study in Houston, Texas, both the incidence and prevalence of AD were higher among older African-American and Hispanic individuals compared to non-Hispanic White adults. In Singapore, ethnic Malays and Indians had higher rates of dementia compared to ethnic Chinese, independent of vascular risk factors. While some research suggests there may be biological or genetic differences driving variations in AD risk, other studies support that these racial and ethnic group differences will not persist after rigorously accounting for important social, cultural, and environmental factors influencing risk of dementia.
Education
Low educational attainment, poor educational quality, and illiteracy have been shown to be associated with increased risk for AD. In a meta-analysis of 13 cohort and 6 case-control studies, low education had a pooled relative risk (RR) estimate for AD of 1.80 (95% CI 1.45–2.27) compared to high education, although the estimate from cohort studies (RR 1.59 [95% CI 1.35–1.86]) was significantly lower than the estimate based on case-control studies (RR 2.40 [95% CI 1.32–4.38]). Prospective cohort studies likely provide a more accurate assessment of the association between education and dementia since they allow for documentation of a decline from a previous level of cognitive performance. Some studies have found that education may be a marker of cognitive reserve as it modifies the association between AD neuropathology and level of cognitive function. For the same degree of brain pathology, persons with higher education demonstrate less cognitive impairment. In addition, higher levels of education may help individuals cope more effectively with cognitive changes. Access to higher levels of education may also be a marker of socioeconomic status, coexisting chronic diseases, access to health care resources, and premorbid intellectual abilities. Thus, while low educational attainment is associated with increased AD risk, it is not clear to what extent low education contributes to AD or whether early educational interventions will protect against the development of dementia.
Gender
There is some evidence that AD is more common among women, although study results are conflicting. Among population-based studies, more than half reported a greater risk of AD in women, while the others found no difference. Some data support that estrogen deficiency following menopause may contribute to the development of AD; however, the effect of hormone replacement therapy on cognition remains controversial. The discrepant findings between studies assessing sex-based variations in dementia risk are likely due to methodologic differences in accounting for potential gender-related variability in life expectancy, education, occupation, and lifestyle factors that can directly affect AD risk.
PATHOPHYSIOLOGY
Genetics of Alzheimer Disease
Based on the onset of symptoms, AD is typically divided into two groups: younger-onset (< 65 years) and late-onset (> 65 years) disease. Younger-onset patients include individuals with familial AD, which accounts for between 1% and 5% of all AD cases and to date has been linked to mutations in the genes for APP (gene name APP) on chromosome 21, presenilin 1 (PS1; gene name PSEN1) on chromosome 14, and presenilin 2 (PS2; gene name PSEN2) on chromosome 1. Among these genes, more than 250 different mutations have so far been identified, accounting for approximately 40% of all cases of familial AD, yet only 0.5% of AD cases overall. Most of the mutations (~ 200) are found in the PSEN1 gene and account for 78% of the familial AD mutations. APP mutations (~ 33) account for about 18% of younger-onset autosomal dominant cases and PSEN2 (~ 22 mutations) for about 4%. Familial AD is characterized by younger onset of cognitive symptoms (typically in the late 40s or early 50s), but is clinically indistinguishable from late-onset AD.
Late-onset AD, also called sporadic AD, accounts for greater than 95% of all cases of the disease. APOE is the only established susceptibility gene consistently found associated with late-onset AD in both case-control and genetic studies. APOE maps to chromosome 19 in a cluster with the genes encoding translocase of outer mitochondrial membrane 40 (TOMM40), apolipoprotein C1, and apolipoprotein C2. The APOE gene exists as three major alleles (ε2, ε3, and ε4) that encode three different ApoE isoforms: ApoE2, ApoE3, and ApoE4. Interestingly, these isoforms only differ in
amino acid sequences at either position 112 or 158 of the protein. The inheritance of the ε4 allele confers an increased risk for developing AD, while the ε2 allele confers protection. For example, presence of one copy of the ε4 allele increases the risk of AD fourfold, whereas inheritance of two copies increases the risk 12-fold. However, unlike genetic mutations associated with familial AD, the presence of APOE ε4 alone is insufficient to cause AD without additional factors. Even though the first report of an association between APOE ε4 and AD was published decades ago, the precise molecular mechanisms underlying this association remain elusive. It is currently unknown whether the APOE ε4 allele influences the rate of production, clearance, or aggregation of the Aβ peptide, or whether it influences cholesterol metabolism and inflammation, which reportedly play a major role in the pathobiology of AD.
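The allele-dose effect described above can be captured in a small lookup table. This is an illustrative sketch using the approximate fold-increases quoted in the text; the function and variable names are hypothetical, and this is not a clinical risk calculator.

```python
# Illustrative sketch only (hypothetical names, not a clinical tool):
# approximate fold-increase in late-onset AD risk by APOE epsilon-4 allele
# count, using the multipliers cited in the text.
APPROX_E4_RELATIVE_RISK = {
    0: 1.0,   # no epsilon-4 copies: reference risk
    1: 4.0,   # one epsilon-4 copy: ~4-fold increase
    2: 12.0,  # two epsilon-4 copies: ~12-fold increase
}

def approx_ad_relative_risk(e4_copies: int) -> float:
    """Return the approximate fold-increase in AD risk for 0, 1, or 2 epsilon-4 alleles."""
    if e4_copies not in APPROX_E4_RELATIVE_RISK:
        raise ValueError("epsilon-4 allele count must be 0, 1, or 2")
    return APPROX_E4_RELATIVE_RISK[e4_copies]
```

Note the dose dependence: risk rises disproportionately with the second ε4 copy, which is one reason the allele is described as a susceptibility factor rather than a deterministic mutation.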
With the advent of genome-wide association studies (GWASs), a number of new genetic loci with genome-wide significance have been identified. In addition to APOE ε4, more than 40 other polymorphisms have been associated with increased risk for late-onset AD. However, none of these associations has been uniformly confirmed in every population group studied to date.
Over 20 genetic loci have been associated with late-onset AD, converging on four main mechanistic pathways: Aβ metabolism, lipid metabolism, immune response, and cell signaling. Further research is needed to clarify the impact of other genetic changes on AD risk, gene-environment interactions, and the impact of such genetic factors on mechanisms of neurodegeneration and neuroprotection.
Neuropathology of Alzheimer Disease
The neuropathologic hallmarks of AD include amyloid plaques, neurofibrillary tangles, and neuritic plaques (Figure 59-1). The latter are a subset of amyloid plaques that are closely associated with neuronal injury and occur with dystrophic neurites. Cerebral amyloid angiopathy (CAA) frequently co-occurs with amyloid plaques, resulting from deposition of Aβ into cerebral vessels. Sporadic CAA is observed in 80% to 90% of AD patients and may cause lobar intracerebral hemorrhages and microbleeds.
Together these processes contribute to loss of neurons and synapses in the neocortex, hippocampus, and other subcortical regions of the brain. The predominance of amyloid plaques versus neurofibrillary tangles or amyloid angiopathy can differ from one patient to another. However,
neuronal/synaptic loss is a constant feature and eventually the direct cause of dementia. The distribution of the disease pathology seems to follow a region-specific pattern, with amyloid plaques being more prevalent in the neocortex and neuronal/synaptic loss being more prevalent in the hippocampus, posterior cingulate, and corpus callosum—areas of the brain closely involved with memory formation and higher cortical activities. Finally, brains of persons with AD are also characterized by a diffuse and widespread invasion of reactive astrocytes, mostly concentrated in the hippocampus and around areas of neuronal loss. These astrocytic changes are not specific to AD and can be observed in other neurodegenerative disorders associated with inflammation and neurotoxic insults.
FIGURE 59-1. Small section of the neocortex from a patient with Alzheimer disease showing two classical neuropathologic lesions of the disease. A. The modified silver staining shows one dense senile (amyloid) plaque indicated by three arrowheads. The plaque consists of aggregated extracellular deposits of amyloid β-peptide (Aβ) fragments surrounded by silver-positive dystrophic neurites. The arrow indicates a neuron containing neurofibrillary tangles, which appear as dark masses of abnormal filaments occupying most of the cytoplasm. B. The image shows higher magnification of two neurons containing neurofibrillary tangles (indicated by arrows). (Reproduced with permission from Shahriar Salamat, MD, PhD, University of Wisconsin School of Medicine and Public Health, Department of Pathology and Laboratory Medicine.)
The dominant component of the amyloid plaque core is Aβ organized in fibrils of approximately 7 to 10 nm intermixed with nonfibrillar forms of the peptide. Neuritic plaques are characterized by a dense core of aggregated fibrillar Aβ, surrounded by dystrophic dendrites and axons, activated
microglia, and reactive astrocytes. In addition, diffuse deposits of Aβ, likely representing a prefibrillary form of the aggregated peptide, are found without any surrounding dystrophic neurites, astrocytes, or microglia. These diffuse plaques can be found in limbic and association cortices, as well as in the cerebellum.
The other neuropathologic hallmark of AD is the presence of neurofibrillary tangles found exclusively in the cytoplasm of neurons (see Figure 59-1). The tangles appear as paired, helically twisted protein filaments composed of highly stable polymers of cytoplasmic proteins called tau. Tau comprises a group of alternatively spliced proteins found in the cytoplasm that possess either three or four microtubule-binding domains and can assemble with tubulin, thus helping the formation of cross bridges between adjacent microtubules. Tau proteins can be phosphorylated in multiple sites, and the degree of phosphorylation is inversely correlated with binding to microtubules. As a result, highly phosphorylated tau proteins dissociate from microtubules and polymerize into filaments forming neurofibrillary tangles. In addition to AD, the abnormal accumulation of filamentous tau is observed in frontotemporal forms of dementia, progressive supranuclear palsy, corticobasal degeneration, and Pick disease. Contrary to prior belief, tau proteins themselves can cause dementia, and multiple mutations in the tau gene have been found in frontotemporal dementia (FTD) with parkinsonism. The precise role of tau proteins in the pathogenesis of AD and their potential interaction with Aβ are still unclear.
In 2012, NIA and the Alzheimer’s Association published revised criteria for AD neuropathologic change. These criteria recommended reporting on the presence and extent of hallmark lesions for AD observed at autopsy independent of the individual’s cognitive state. These new guidelines took into account several well-established neuropathologic scoring criteria and integrated them into an “ABC score” based on three parameters (Amyloid, Braak, CERAD): criterion “A” ranks the Aβ plaque score (based on criteria from Thal et al.), criterion “B” measures the neurofibrillary tangle stage (modified from Braak criteria), and criterion “C” assesses the neuritic plaque score (modified from the Consortium to Establish a Registry for Alzheimer Disease [CERAD]). For reporting, these ABC scores are then transformed into one of four levels of neuropathologic change: none, low, intermediate, or high. While CAA is not considered in the “ABC” score, the guidelines recognize that these changes frequently co-occur with
parenchymal Aβ plaques and recommend neuropathologists comment on such changes separately within the neuropathology report.
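As a rough sketch of the reporting logic above, an ABC score can be validated and partially mapped in code. All names here are hypothetical; only the A = 0 case is encoded, since the full mapping of (A, B, C) combinations to a reported level is defined in the published NIA-AA guideline table and is not reproduced in this chapter.

```python
# Hedged sketch of NIA-AA "ABC" reporting (hypothetical function name).
# Each criterion is ranked 0-3; without Abeta plaques (A = 0) the case is
# reported as no AD neuropathologic change. All other combinations require
# the published guideline lookup table, which is not reproduced here.
LEVELS = ("none", "low", "intermediate", "high")

def abc_level(a_thal: int, b_braak: int, c_cerad: int) -> str:
    """Map an ABC score toward one of four levels of AD neuropathologic change."""
    for name, score in (("A", a_thal), ("B", b_braak), ("C", c_cerad)):
        if score not in range(4):
            raise ValueError(f"criterion {name} must be ranked 0-3")
    if a_thal == 0:
        return "none"  # no Abeta plaques: not AD neuropathologic change
    raise NotImplementedError("consult the published NIA-AA table for A > 0")
```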
Amyloid Precursor Protein Processing and Generation of Aβ
Aβ is a 39 to 43 amino acid hydrophobic peptide proteolytically released from a much larger precursor, APP. Although APP is the major source of toxic Aβ, it also exerts several important functions in the nervous system, including serving as a cell-surface receptor, growth factor, protease inhibitor, cell–cell interaction molecule, coreceptor/partner in the endocytic/lysosomal network, coagulation inhibitor factor, cell-surface scaffold protein, kinesin-interacting molecule for axonal transport, and transcription factor. The generation of Aβ from APP (Figure 59-2) requires the sequential recruitment of two enzymatic activities: β-secretase, also called BACE1, and γ-secretase, a multimeric protein complex containing presenilin, nicastrin,
Aph-1, Pen-2, and CD147. The β-cleavage is the rate-limiting step and occurs before the γ-cleavage. It liberates a large N-terminal fragment of the protein (sβAPP) that is released into the extracellular milieu and a small (~12 kDa) membrane-anchored fragment called β-APP-CTF (or C99). The release of the large N-terminal domain allows subsequent γ-cleavage and liberation of Aβ and the signaling-active intracellular domain (AICD) of APP (see Figure 59-2). Generation of Aβ40 and Aβ42 results from γ-cleavage of Aβ at positions 40 and 42, respectively. The release of Aβ into the extracellular milieu is followed by oligomerization and aggregation in the form of fibrils and amyloid plaques. Additionally, small Aβ aggregates are also found in the soma of neurons, suggesting that Aβ fragments can escape secretion and aggregate in the intracellular environment. The molecular mechanisms underlying the toxicity of Aβ are still being investigated and remain incompletely understood. However, research seems to indicate that small Aβ aggregates (oligomers), which represent the “preplaque” neurotoxic species of Aβ, act as the proximate cause of the neuronal injury and synaptic loss associated with AD. Additionally, the C-terminal tail of APP can undergo further processing at amino acid 664 of APP695, liberating two small cytosolic fragments, Jcasp and C31. Both of these fragments are generated only after γ-cleavage, require caspase-mediated processing of APP, and can activate proapoptotic pathways in a variety of cellular systems.
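The positional logic of γ-cleavage can be illustrated with simple string slicing, assuming the canonical 42-residue human Aβ sequence; this is a teaching sketch with hypothetical names, not a bioinformatics tool.

```python
# Schematic sketch (hypothetical names): beta-cleavage defines residue 1 of
# the Abeta region; gamma-cleavage after residue 40 or 42 then yields Abeta40
# or Abeta42. The string below is the canonical human Abeta 1-42 sequence.
ABETA42 = "DAEFRHDSGYEVHHQKLVFFAEDVGSNKGAIIGLMVGGVVIA"

def gamma_cleave(abeta_region: str, position: int) -> str:
    """Return the Abeta peptide released by gamma-cleavage after `position`."""
    if position not in (40, 42):
        raise ValueError("gamma-cleavage is modeled here only at positions 40 and 42")
    return abeta_region[:position]

abeta40 = gamma_cleave(ABETA42, 40)
abeta42 = gamma_cleave(ABETA42, 42)
```

The two products differ only in the last two hydrophobic residues, Aβ42 being the species generally regarded as more aggregation-prone.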
FIGURE 59-2. Generation of amyloid β-peptide (Aβ) from amyloid precursor protein (APP). APP is a type 1 membrane protein with a large extracellular domain, a single membrane-spanning domain, and a short cytoplasmic tail. The Aβ region of APP (in yellow) includes the first 12 to 14 amino acids of the membrane domain. (A) shows a schematic image of APP on the cell surface of a neuron, whereas (B) provides a closer view of APP processing. The initial enzymatic step for the generation of Aβ requires proteolysis of APP at the β-site (amino acid 1 of the Aβ region). This event liberates a large N-terminal fragment (sβAPP) that is rapidly secreted into the extracellular milieu and a small C-terminal fragment (β-APP-CTF) of 99 amino acids (also called C99). The removal of sβAPP most likely induces a conformational change that allows subsequent cleavage by γ-secretase. Once generated, the Aβ peptides aggregate in the brain in the form of plaques. Further cleavage of β-APP-CTF by γ-secretase liberates the signaling-active APP intracellular domain (AICD). In addition to the above β/γ pathway, APP can also be cleaved at the α-site (between amino acids 16 and 17 of the Aβ region), precluding the generation of Aβ.
The most critical clinical link between Aβ and AD came from the observation that patients with Down syndrome (trisomy 21) had a higher propensity for developing a clinical and pathologic phenotype resembling AD, thereby suggesting a potential association between AD and chromosome
21. This observation was further strengthened by the fact that Aβ was the major component in plaques from both patients with Down syndrome and AD, and that its genesis was related to a gene (APP) located on chromosome 21, close to the obligate Down syndrome region. Following the identification of APP, several groups found mutations in the APP gene that were linked to familial forms of AD. Given that the duplication of the APP locus could result in early AD and that Down syndrome patients with partial trisomy 21 developed AD only when the trisomy was proximal to the APP locus, the potential direct relationship between APP metabolism and AD seems strong. Furthermore, causative mutations in the genes that encode for PS1 and PS2, which are also implicated in the metabolism of APP, have been found and are associated with familial forms of AD, thereby conferring additional strength to the linkage between APP/Aβ metabolism and AD.
Although the generation of Aβ from APP seems to be a pivotal step in the pathobiology of AD, it does not explain all the neuropathologic changes observed in patients with AD. For example, examination of the brains of transgenic mice expressing human APP harboring one or more familial AD-associated mutations reveals the presence of amyloid plaques and some synaptic loss and cognitive deficits, but an absence of tau pathology and astrocytosis. This suggests that additional biochemical/molecular events are required to develop the full pathologic spectrum of AD. To circumvent this issue, several new animal models have been generated in which human APP is accompanied by additional genes, including the presenilins (harboring familial AD-associated mutations), tau, and APOE. Recently, several transgenic mouse models harboring three or five familial AD-associated mutations (respectively called 3X and 5X mice) in two or more genes have been generated. All of these models demonstrate that Aβ is an essential element for the development of AD-like neuropathology and reveal a close relationship between Aβ and the phosphorylation/aggregation state of tau. However, none of the mouse models fully reproduces the classical AD phenotype, again suggesting that Aβ seems to be necessary but not sufficient to produce the entire spectrum of AD neuropathology. Transgenic mice expressing the human microtubule-associated protein tau develop the typical tau-related pathology found in individuals suffering from FTD with parkinsonism; however, they do not develop amyloid plaques, suggesting that tau is not required for the formation of plaques. Crossing these mice with APP transgenic mice potentiates tau-related pathology and neuronal loss but does not aggravate plaque pathology, suggesting that Aβ acts upstream of tau in the classical AD phenotype.
However, studies from patients with AD, mouse models, and ex vivo cellular systems indicate that Aβ and tau can interact synergistically, thereby fostering their respective aggregation and neuronal loss. Thus, the true relationship between Aβ and tau is more complex than previously thought and likely involves additional molecular and biochemical pathways acting upstream of both Aβ and tau production in the AD brain.
CLINICAL PRESENTATION
The most common clinical onset of AD is an amnestic presentation, characterized by slowly progressive memory loss for recent events. Patients with AD frequently have problems remembering recent conversations, dates, appointments, and may misplace items. Many patients are not aware of these deficits and are brought to medical attention by their family members or friends. For some patients, memory loss symptoms are first noted by others during a stressful life event, such as the patient’s hospitalization or the death of a spouse; however, a thorough interview frequently reveals that the cognitive deficits preceded such an event by months to years. The memory deficits of AD are generally differentiated from those caused by normal aging by the fact that AD-related deficits are progressive and interfere with the individual’s usual daily activities. Memory loss leading to a change in functional status is not a part of normal aging and warrants further evaluation.
Nonamnestic presentations of AD are also common and may include prominent initial impairments in language abilities, visuospatial skills, and executive function. As these presentations are less commonly recognized by patients, families, and clinicians alike as being early symptoms related to AD, individuals with nonamnestic presentations are frequently misdiagnosed or experience a delay in diagnosis. In addition to the more common amnestic presentation, nonamnestic presentations are specifically identified in the NIA-AA diagnostic criteria for AD (see Table 59-1). Patients who initially present with language impairment frequently will complain of marked word-
finding problems with subsequent progression to paraphasic errors and circumlocution. AD patients with a visuospatial presentation may have prominent deficits in spatial cognition, including poor object and face recognition, an inability to perceive multiple visual elements simultaneously, and difficulty understanding written language. Executive dysfunction is another common initial presenting symptom of AD, leading to impairments in reasoning, judgment, problem solving, and an inability to complete complex demanding tasks. Deficits in concentration and attention frequently occur in patients with AD, but these changes may also be notable in persons with depression, attention deficit disorder, sleep disorders, or adverse medication effects.
As the disease progresses, changes in personality are commonly seen in patients with AD and may include increased passivity, lack of interest, agitation, restlessness, and/or overactivity. AD patients may exhibit increased irritability when confronted with memory loss symptoms, such as when struggling to find a word, being reminded of a prior conversation or event, or searching for a misplaced item. More than 30% of persons with AD develop symptoms of depression, which may be the first clinical presentation of the disease. Early signs of depression in patients with AD include increased irritability, alterations in appetite or sleep, trouble concentrating or making decisions, low energy, social withdrawal, and a decline in physical function. Worsening of behavior and cognitive symptoms in the evening is also common in patients with AD and may be related to changes in circadian rhythm from loss of sunlight.
In the later stages of the disease, individuals may have increased confusion, dysphagia, impaired gait, and repeated falls. In some patients with AD, disruptive behaviors may increase with aggression, agitation, and physical or verbal hostility; in others, these behavioral symptoms lessen with disease progression. The majority of patients become increasingly frail and dependent for self-care and activities of daily living with many patients developing bowel and bladder incontinence. Persons in the late stages of AD may become immobile and bed-bound, which increases their risk of developing pressure sores, malnutrition, and dehydration. The most common causes of death in patients with AD include pneumonia, urinary sepsis, dehydration, pressure sores, fractures, and malnutrition. The median survival period from the time of diagnosis to death generally ranges from 7 to 10 years, although some patients, especially those with familial AD, die earlier.
EVALUATION
For many older patients with cognitive complaints, their evaluation, diagnosis, and management may be effectively completed within a primary care setting. If available, utilization of multidisciplinary team members from nursing, social work, psychology, and/or pharmacy can greatly aid a primary care physician in the diagnosis and management of patients with cognitive concerns. A smaller subset of patients will need more in-depth neuropsychological assessment and clinical evaluation from a dementia specialist. The NIA-AA clinical diagnostic criteria for dementia, AD, and MCI (see Tables 59-1 and 59-2) were designed to be used across all clinical settings, including primary care, specialty clinics, and long-term care. The clinical diagnoses of MCI and dementia are primarily ascertained through completion of a focused interview with the patient and an informant who knows the patient well, a thorough review of the patient’s medical history and medication use, a comprehensive physical examination, a formal assessment of cognitive function, basic laboratory tests, and neuroimaging (Table 59-3). While the differential diagnosis for AD is extensive (Table 59-4), a systematic approach to dementia diagnosis can help primary care clinicians identify common confounding medical and psychiatric conditions and medications that can adversely affect cognition. In addition, a structured evaluation may facilitate accurate diagnosis of the most common causes of dementia—AD and AD mixed with vascular dementia as well as predementia syndromes such as MCI. Integrating various established diagnostic criteria, Figure 59-3 shows a primary care diagnostic algorithm developed to guide clinicians in their assessment of patients with cognitive complaints.
TABLE 59-2 ■ NIA-AA CORE CLINICAL DIAGNOSTIC CRITERIA FOR MILD COGNITIVE IMPAIRMENT
MILD COGNITIVE IMPAIRMENT*
• Concern regarding a change in cognition, in comparison to the patient’s previous level, noted by the patient, an informant who knows the patient well, or a clinician observing the patient.
• Evidence of lower performance in one or more cognitive domains (memory, executive function, attention, language, and/or visuospatial skills) that is greater than would be expected for the patient’s age and educational background.
• The patient maintains preserved independence in functional abilities, although they may take more time, be less efficient, and make more errors at performing such activities than in the past.
• The patient does not meet criteria for dementia.
*DSM-5 “mild neurocognitive disorder” criteria also state that these deficits do not occur exclusively in the context of a delirium or other mental disorder (eg, major depressive disorder, schizophrenia). Identification and exclusion of other neurologic, psychiatric, and medical disorders is implied in the context of the NIA-AA diagnostic criteria.
Modified with permission from Albert MS, DeKosky ST, Dickson D, et al. The diagnosis of mild cognitive impairment due to Alzheimer’s disease: recommendations from the National Institute on Aging–Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimers Dement. 2011;7(3):270–279.
FIGURE 59-3. Algorithm for the clinical diagnosis of Alzheimer disease.
Identification of a cognitive concern is the first step in the evaluation.
While early cognitive changes in some patients may be readily identified by the individuals themselves, their families, and/or their clinicians, such symptoms may not be as apparent in other patients due to a variety of factors, including poor insight, attribution of such changes to normal aging, cultural views of dementia, or lack of corroborative history from others. Whether all older adults should undergo routine screening for dementia remains controversial. The US Preventive Services Task Force recommends against routine screening for dementia in asymptomatic older adults based on insufficient evidence that such widespread screening impacts individual or societal outcomes. However, the US Medicare Annual Wellness Visit requires that clinicians assess cognitive function by “direct observation,” although no cognitive screening tool is endorsed. In an effort to operationalize the Medicare Annual Wellness Visit requirements, the Alzheimer’s Association recommends using self-reported memory concerns, clinician observations, or concerns from a person who knows the patient well to trigger a formal memory assessment. Screening questions such as “Does your memory bother you?” or “Do you think your memory is worse than others of your age?” also may be used to determine which older patients
need a formal evaluation of cognitive performance. Identifying memory concerns through self-report or screening questions may reduce the number of unnecessary formal cognitive screening tests administered to asymptomatic adults at low risk for dementia. However, individuals without a close informant may need structured cognitive tests to identify memory concerns.
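The trigger logic described above, where any one of a self-reported concern, a clinician observation, or an informant concern should prompt a formal memory assessment, can be sketched as a simple decision rule. The function name and boolean interface below are illustrative assumptions, not part of the Medicare or Alzheimer’s Association materials.

```python
# Illustrative sketch (hypothetical name): any single source of concern is
# sufficient to trigger a formal cognitive assessment.
def needs_formal_assessment(self_reported_concern: bool,
                            clinician_observation: bool,
                            informant_concern: bool) -> bool:
    """Return True when any concern source should prompt formal memory testing."""
    return self_reported_concern or clinician_observation or informant_concern
```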
When a cognitive concern is identified, a separate clinic visit should be arranged to investigate the underlying cause (see Figure 59-3 and Table 59-3).
TABLE 59-3 ■ EVALUATION OF THE PATIENT WITH COGNITIVE CONCERNS
History of cognitive decline
• Primary symptom(s) at onset (memory loss, language/spelling errors, impaired reasoning, difficulties multitasking, personality changes, etc)
• Date of onset and time course of cognitive decline (gradual, progressive, stepwise, fluctuating, abrupt, rapidly progressive, etc) and whether or not it was associated with delirium
• Impact on present and/or highest level of function (including tasks at work, hobbies, daily household duties including instrumental activities of daily living [IADLs])
• Safety concerns (medication management, driving, kitchen safety, use of firearms or heavy equipment, wandering, financial scams, etc)
• Other associated symptoms (depression, tremor, frequent falls, visual hallucinations, stroke and/or transient ischemic attack symptoms, ataxia, urinary incontinence, gait changes, personality changes, etc)
Past medical and psychiatric history
• Vascular risk factors (including how well they have been controlled over time)
• Stroke and/or transient ischemic attacks (assess whether the cerebrovascular event was associated with onset of cognitive symptoms)
• Atrial fibrillation, carotid artery disease, or cardiac disease and interventions (eg, coronary artery bypass surgery, valve surgery) and any associated onset of cognitive symptoms
• Other major central nervous system event(s) (eg, traumatic brain injury with loss of consciousness, anoxic brain injury, postoperative cognitive dysfunction, etc)
• Hearing and/or vision loss
• Obstructive sleep apnea (including how well it is treated with continuous positive airway pressure [CPAP] or other modalities)
• Alcohol or other substance abuse
• Depression, anxiety, posttraumatic stress disorder, or other psychologic illness
• Parkinson disease, parkinsonism, amyotrophic lateral sclerosis, multiple sclerosis
• Seizure disorder
• History of malignancy with prior or current treatment with chemotherapy
Medication review
• Prescription and nonprescription medications and supplements (especially those with anticholinergic or sedating side effects)
• Temporal onset of cognitive symptoms relative to medication/supplement initiation or dose changes
Social history
• Family, friends, and other social support
• Use of community resources (including home aides, senior centers, meal services, etc)
• Educational history (including formal years of education and/or technical training, any interruption in schooling, repeated grades, and suspected or diagnosed learning disabilities or attention deficit disorders)
• Work history (including types of responsibilities associated with occupations)
• Military history (including exposure to combat or blast injuries)
• Hobbies and other daily activities
• Substance use history (including any prior history of substance/alcohol abuse)
Family history
• History of AD or other dementias (including age of onset of symptoms in affected family members)
• History of other neurodegenerative disorders, strokes, psychiatric illnesses, etc
Physical examination
• General appearance (attention, comprehension, cooperation, personal hygiene and grooming, social appropriateness, psychomotor slowing, word-finding difficulties)
• Mental status (attitude, mood, affect, insight, judgment, thought content, thought process, speech, language)
• Cranial nerves (facial asymmetry, visual acuity, pupillary responses, extraocular movements, visual fields, hearing impairment)
• Motor function and integration (strength, tone, cogwheeling, imitation of motor tasks to test for apraxia)
• Sensory function and integration (stereognosis [identification by touch of an object placed in the hand], graphesthesia [identification of a number written on the palm])
• Coordination (rapid alternating movements, finger-to-nose testing, heel-to-shin testing)
• Deep tendon reflexes
• Gait
Within a dedicated primary care clinic visit, an optimal cognitive assessment includes gathering information not only from the patient’s perspective, but also independently in a separate interview from an informant who knows the patient well. Depending on available time and resources, an independent informant interview may be accomplished through utilizing a variety of health care team members, such as social workers, medical assistants, nurses, or psychologists to conduct a brief structured informant interview or a full detailed assessment. Important historical elements include establishing when the cognitive symptoms began and the very first symptoms noted (such as problems with memory, language, executive function, apraxia, or personality changes). A careful delineation of the time course of progression will narrow the differential diagnosis and will help identify whether there are multiple contributing factors or one underlying process.
Frequently, an inciting event that disrupts coping skills, such as a hospitalization or the death of a spouse, will draw the attention of family members to a patient’s memory problems. The family may give a history of an acute onset of memory impairment following the inciting event, but careful questioning may identify cognitive problems preceding that time period and point to a gradually progressive course.
A key component to the interview is establishing the patient’s baseline cognitive and functional performance, taking into account past educational opportunities, estimated baseline intellectual function, occupational history, and prior established skills and abilities. Understanding the patient’s
baseline function will put neuropsychological test results into context in order to prevent over- or underdiagnosing dementia in patients who present with cognitive concerns. Changes in the person’s ability to carry out tasks related to their occupation, hobbies, household management, and other volunteer activities should then be ascertained.
There are common reversible causes of cognitive dysfunction that should be addressed next. One of the first steps should be a careful review of prescription and nonprescription medications. Drugs with known anticholinergic properties (such as antihistamines, tricyclic antidepressants, bladder antispasmodic agents, etc) or sedating side effects (such as high-dose gabapentin, other antiepileptic medications, narcotic analgesics, benzodiazepines, sleeping aids, etc) should be carefully reviewed to see if the benefit of the offending medication outweighs the adverse cognitive effects. Patients should be included in shared decision-making with any medication adjustments as the value placed on various symptoms is likely to differ between individuals.
Clinicians should carefully evaluate their older patients for depression, anxiety, or other mood disorders that can affect cognitive performance.
Depression may be a prodromal syndrome prior to dementia onset, but also commonly co-occurs with this syndrome. Pointed questions should assess changes in sleep duration and/or quality, interest in activities, feelings of guilt, loss of energy, impaired concentration, changes in appetite, psychomotor slowing, and suicidal thoughts. A brief screening tool such as the Geriatric Depression Scale (GDS) can be administered by a health care team member or self-administered while the patient is waiting for the clinician. Older patients with depression frequently complain of problems with poor concentration and forgetfulness and may perform poorly on tests of attention, speed of processing, and memory. In such patients, it is important to differentiate a loss of interest related to depression from a lack of initiative due to a neurodegenerative disorder.
Treating depression and anxiety may lead to improvements in cognitive performance as well as mood.
Hearing loss may mimic cognitive dysfunction as patients who cannot hear well may not be able to properly encode new information from conversations or other auditory-received information. Questions on hearing loss symptoms and use and fit of any prescribed hearing aids can alert the provider as to whether hearing loss is contributing to cognitive symptoms or
if further hearing evaluation is needed (see Chapter 34 for approach to screening for hearing loss).
A careful assessment of alcohol use should be completed in all patients, especially if cognitive performance varies widely from visit to visit or if the patient lives alone. Risk for obstructive sleep apnea should be assessed with several screening questions about the patient’s snoring, witnessed apneic episodes, excessive daytime sleepiness, or nonrestorative sleep. In patients with diagnosed sleep apnea, their ability to effectively and regularly use their continuous positive airway pressure (CPAP) device should be assessed and any difficulties should be reported to the sleep medicine and/or respiratory therapy team to seek out other mask options for better fit and tolerance.
Obstructive sleep apnea with its related hypoxia can cause profound effects on cognition.
Vascular disease may contribute to cognitive impairment through a variety of mechanisms. In addition to stroke causing acute cognitive decline, chronic low cerebral blood flow leading to subclinical hypoperfusion may also contribute to cognitive impairment and AD. Thus, a careful assessment of vascular risk factors should be completed to make sure they are well treated. Carotid bruits or a history of sudden cognitive changes should prompt work-up for cerebrovascular disease with neuroimaging (computed tomography [CT] or preferably magnetic resonance imaging [MRI]) and either carotid ultrasound or magnetic resonance angiogram (MRA).
Delirium is associated with an acute or subacute onset of fluctuating cognitive dysfunction and may be caused by a wide variety of medical conditions and medications. In patients with delirium, a careful history frequently can tease out the temporal relationship between the onset of potentially reversible cognitive symptoms and contributing underlying medical problems or medications. Patients who have had a significant medical illness may exhibit signs of delirium for weeks to months following the inciting illness. Care should be taken to avoid making a diagnosis of dementia in the presence of a resolving delirium. Since dementia is a risk factor for delirium, however, the presence of a delirium may suggest an underlying neurodegenerative disorder.
Additional information on safety should be obtained, including inquiries on medication management, driving, kitchen safety, use of firearms or heavy equipment or power tools, wandering, and susceptibility to financial scams. A review of systems should include questions on depression, tremors, falls,
visual hallucinations, symptoms of stroke or transient ischemic attack, ataxia, dysphagia, urinary incontinence, waxing and waning level of consciousness, agitation, and personality changes.
The patient’s past medical history should be reviewed for medical and psychiatric conditions affecting cognition, including cardiovascular and cerebrovascular disease and associated risk factors, surgical procedures including coronary artery bypass surgery, significant hearing loss, depression, Parkinson disease, TBI, seizures, and/or heavy alcohol use. A thorough medication review should be conducted to assess all prescription and nonprescription medications and the association of any medication initiation and/or dose adjustment with changes in cognitive symptoms.
Patients should be encouraged to bring in all pill bottles to the clinic visit. The social history should assess the patient’s education and occupational baseline, their social support network, and their use of community resources. An accurate assessment of prior or current alcohol or illicit drug use and a sexual history with special attention to sexually transmitted disease (notably syphilis and HIV) risk factors are critical to a correct diagnosis. An assessment of family history of dementia should include age of onset and time course of any symptoms of family members with memory loss.
The physical examination should include assessment of general appearance and a mental status examination (see Table 59-3). Careful observation upon interviewing a patient can provide rich information as to their ability to care for themselves, their organizational ability, their ability to provide detail within their conversation, and their comprehension of posed questions and the appropriateness of their response. Ears should be checked for any cerumen accumulation and/or hearing loss. A neurologic examination should screen for focal deficits, gaze palsies, increased muscle tone, cogwheeling, tremors, and ataxia. A detailed review of a comprehensive mental status and neurologic examination in older adults is described in Chapter 9. Cardiac arrhythmias, carotid bruits, or abdominal or femoral bruits may suggest a vascular contribution. The remainder of the physical examination should focus on ascertaining any major medical conditions that could have significant cognitive effects, such as hypoxia or significant active infection.
While there is no consensus as to which is the best cognitive screening tool, there are a variety of cognitive screening tests that have been validated in a primary care setting. Clinicians should identify several with which they
are comfortable so that they can be used consistently over time with their patient population. The Mini-Mental State Examination (MMSE), the Montreal Cognitive Assessment (MoCA), and the Saint Louis University Mental Status Examination (SLUMS) have been widely used in primary care settings. The Alzheimer’s Association recommends use of the General Practitioner Assessment of Cognition (GPCOG), the Mini-Cog, or the Memory Impairment Screen (MIS) for cognitive screening related to the Medicare Annual Wellness Visit, as these tests take less than 5 minutes to administer, have good psychometric properties, and can be administered by a variety of health care team members. Informant assessment of changes in patient performance may include the GPCOG informant questionnaire, the Eight-Item Interview to Differentiate Aging and Dementia (AD8), or the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE) (see Table 59-3). If time and resources allow, additional interview time with an informant may identify specific areas of safety concerns and help tailor the management plan.
In adults with high baseline cognitive function, these screening tests may be normal in the presence of obvious functional impairment necessitating referral to a neuropsychologist for more detailed cognitive testing. In individuals with lower educational levels or learning disabilities, cognitive screening tests may suggest impairment, but the history may not suggest any changes in functional status. Thus, it is critical to use age- and education-adjusted norms, and integrate historical information on baseline function to decide if further neuropsychological testing is warranted or if abnormal testing actually reflects the patient’s baseline cognitive performance.
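As a concrete illustration of education adjustment, the standard MoCA administration instructions add one point to the total score for 12 or fewer years of formal education before comparing against the conventional cutoff of 26 out of 30. The sketch below assumes that rule; function names are hypothetical and local, population-specific norms may differ:

```python
def adjusted_moca(raw_score: int, years_of_education: int) -> int:
    """Apply the standard MoCA education correction:
    add 1 point for <=12 years of education (capped at 30)."""
    if years_of_education <= 12 and raw_score < 30:
        return raw_score + 1
    return raw_score

def screen_result(adjusted_score: int, cutoff: int = 26) -> str:
    # Scores at or above the cutoff are conventionally in the normal
    # range; lower scores warrant history-guided follow-up testing,
    # not a diagnosis by themselves.
    if adjusted_score >= cutoff:
        return "normal range"
    return "possible impairment"
```

For example, a raw score of 25 in a patient with 10 years of schooling adjusts to 26, moving the result into the conventional normal range, which is exactly the kind of context-dependent interpretation the paragraph above describes.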
Laboratory data can assist in identifying factors that may be contributing to cognitive decline. Rarely do these factors alone account for the overall cognitive changes that lead to the presentation of a patient with significant memory loss. Nevertheless, treating such factors may improve cognitive symptoms in patients with pronounced laboratory abnormalities, numerous comorbid illnesses, or an underlying neurodegenerative process.
Recommended laboratory tests include vitamin B12, folate, thyroid-stimulating hormone (TSH), electrolytes, complete blood count, liver enzymes, and 25-OH vitamin D. If symptoms are atypical or if there are
specific risk factors, then an HIV test or serologic test for syphilis may be
performed. In patients with assumed heavy alcohol use, thiamine (vitamin B1) levels should be checked. In some European countries, routine
assessment of cerebrospinal fluid (CSF) for β-amyloid and tau levels is done as part of the clinical evaluation. While CSF β-amyloid and tau levels may increase diagnostic accuracy of MCI and dementia due to AD, in general they are not recommended for widespread clinical practice as in most cases they do not change a patient’s management plan. CSF collection may be used in memory specialty clinics, though, to differentiate between different dementias, including Creutzfeldt–Jakob disease, normal pressure hydrocephalus (NPH), or other less-common causes of neurodegeneration (see Table 59-4). Genetic testing for APOE ε4 genotype is not recommended in routine clinical practice. Testing for PSEN1, PSEN2, or APP genes should be reserved for specialists evaluating cases in which there is a suspicion for familial AD.
TABLE 59-4 ■ DIFFERENTIAL DIAGNOSIS FOR ALZHEIMER DISEASE
In patients with documented cognitive impairment, it is recommended that either a CT or MRI scan of the brain be obtained. If neuroimaging was
obtained for another indication prior to the onset of cognitive symptoms, in most cases the patient should be reimaged. Typical findings for AD on neuroimaging can range from a fairly normal scan to focal or diffuse cerebral atrophy. A CT of the head without contrast is usually sufficient to screen for significant cerebrovascular disease, brain tumors, subdural hematoma, or NPH. MRI can provide more information if lacunar infarcts are suspected.
MRA may be helpful in identifying significant stenosis that could cause hypoperfusion. In persons with suspected seizure disorder or Creutzfeldt– Jakob disease, an electroencephalogram (EEG) may be considered. Use of fluorodeoxyglucose (FDG) positron emission tomography (PET) and amyloid PET imaging to differentiate FTD from AD should be reserved for specialty clinic use. Tau PET imaging is a novel research tool that is not yet approved for clinical practice.
Formulating a Diagnosis
Once a cognitive concern is recognized and delirium is ruled out, the clinician should identify and document any impaired cognitive domains (such as memory, executive function, language, or visuospatial skills) on cognitive testing and any functional loss in the individual’s daily activities. Each potentially reversible cause of cognitive impairment should be outlined (ie, medication side effects, alcohol, sleep apnea, depression, or other medical comorbidities) and a plan to address these conditions should be developed.
Objective cognitive impairment in the context of a supportive clinical history plus a decline in the individual’s daily functional abilities are key elements necessary to differentiate normal cognitive aging and subjective cognitive decline (SCD) from MCI and dementia. With normal aging, individuals may experience a decline in mental processing speed and may have more difficulty learning new material, but these cognitive changes should not affect their usual function within their daily activities. For example, a healthy older adult may have more difficulty recalling an acquaintance’s name or learning a new computer program, but their cognitive testing should be normal and daily functional activities should remain intact. SCD is a term used primarily in research settings to broadly describe symptoms within a pre-MCI stage of neurodegeneration. SCD is currently defined as a self-identified persistent decline in cognitive capacity compared with the individual’s previous normal status in a person who still performs in the normal range on standardized cognitive tests. An example would be a business manager with
normal performance on cognitive testing, who has noticed a subjective decline in her efficiency in managing numerous projects simultaneously despite maintaining a similar work load for many years. It is not yet known what percentage of patients presenting with SCD progress on to MCI and eventually AD; however, there is converging evidence that risk for progression to MCI and dementia increases in persons with SCD. Identification of patients with SCD allows clinicians to complete a thorough evaluation for other medical, psychological, and medication factors that could contribute to cognitive decline. Patients with SCD should be screened for cognitive dysfunction annually to evaluate for objective evidence of a decline in cognitive performance.
Once a person with SCD develops deficits in at least one cognitive domain, they may meet criteria for MCI (see Table 59-2), a symptomatic predementia syndrome noted in 15% to 20% of older adults.
Individuals with MCI may present with cognitive complaints and describe a variety of methods they use to compensate for these cognitive changes, such as increasing use of lists, calendars, alarms, and other reminders. They maintain their level of function, but are less efficient in doing so. For example, a cabinetmaker who demonstrates impairment in executive function on testing may complain that in order to complete a cabinet work order with his same level of quality workmanship, it now takes him 2 to 3 weeks, whereas a few years ago he could complete such an order in 1 week. Once an individual’s cognitive impairment progresses to the point that they can no longer maintain their baseline level of function, they may meet criteria for dementia. In the previous example, as the cabinetmaker’s cognition declines he may no longer be able to complete a cabinet order at all or may finish it with poorer-quality workmanship. At that point he may have progressed to a dementia.
Approximately 12% to 15% of persons with MCI will progress each year to AD or other forms of dementia. MCI patients who have impairment in memory performance (single-domain amnestic MCI) or in memory plus another cognitive area (multidomain amnestic MCI) are more likely to progress to AD. Older individuals with nonamnestic MCI may be more likely to progress to other forms of dementia, such as FTD, dementia with Lewy bodies, or vascular dementia. Once a diagnosis of dementia is suspected, the clinician must differentiate between various causes of dementia. AD is the most common form of dementia in the United States, accounting for 50% to
90% of all dementia cases. Dementia with Lewy bodies, vascular dementia, and FTD are other common forms of dementia (Table 59-5). Details of the clinical and pathologic features of these dementias are covered in Chapter
63. Differentiating AD from other causes of memory loss can help clinicians choose effective therapies, anticipate behavior changes and other potential complications, and provide patients and caregivers information on prognosis.
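The quoted 12% to 15% annual conversion rate implies a substantial cumulative risk over several years. Assuming, purely for illustration, a constant and independent annual rate p, the cumulative probability of progression by year n is 1 − (1 − p)^n:

```python
def cumulative_progression(annual_rate: float, years: int) -> float:
    """Cumulative probability of progressing from MCI to dementia,
    assuming (for illustration only) a constant annual conversion
    rate applied independently each year."""
    return 1 - (1 - annual_rate) ** years

# At a 12% annual rate, roughly half of MCI patients would be
# expected to progress within about 5 years under this assumption.
```

Real cohorts do not follow a constant hazard, and some patients with MCI revert to normal cognition, so this is a back-of-the-envelope bound rather than a prognostic model.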
TABLE 59-5 ■ CLINICAL FEATURES OF COMMON DEMENTIAS
If a patient does not meet the criteria for AD yet clinical suspicion remains, the clinician may consider obtaining more detailed neuropsychological testing or repeating screening cognitive testing in 6 to 12 months to clarify the diagnosis as the symptoms become more apparent.
Persons with suspected MCI should be reassessed on an annual basis to evaluate for progression to dementia. If the symptoms or course of the disease are atypical for AD, the level of functional decline is out of proportion to neuropsychological testing results, or if there are significant
behavioral issues that need to be addressed, then referral to a geriatrician, neurologist, or psychiatrist with expertise in dementia is recommended.
Future Diagnostic Tools
Novel biomarkers are continually being investigated for use in the diagnosis of AD and other types of dementia, as well as in identifying predementia syndromes. Many of these tools are still used chiefly in research settings, but are being studied to evaluate their potential role in clinical practice. Current investigations are focusing on specific neuroimaging modalities and biomarkers (including blood and CSF) with strong relationships to clinically relevant outcomes that could be used not only for diagnosis of dementia, but also for identifying asymptomatic persons at risk for cognitive decline.
Neuroimaging modalities have shown great promise in documenting not only the late effects of neuronal damage in AD (regional and global cerebral atrophy), but also in identifying preclinical pathology (such as in vivo amyloid and tau imaging on PET) and the functional consequences of such pathology (such as changes in activation patterns on functional MRI or glucose uptake on FDG-PET). CSF levels of Aβ and tau have been shown to predict risk for progression to AD in older adults and persons with MCI. With the recent advances in the safety and acceptability of lumbar punctures, CSF markers may eventually find their way into the widespread clinical diagnostic work-up of preclinical AD. Identification of reliable blood biomarkers is a rapidly expanding field with significant clinical applications. Future research is focusing on how novel biomarkers may be used in combination with cognitive tests to identify which individuals are at greatest risk for AD, who would benefit most from preventive therapies, and how effective these therapies are in modifying the underlying disease process in asymptomatic and symptomatic individuals.
Updated NIA-AA Research Framework
As noted earlier, the NIA-AA criteria for AD diagnosis were created in 2011. In April 2018, the NIA-AA released a report proposing a new research framework for studying AD that served to update the 2011 guidelines with subsequent scientific progress. The most important, paradigm-shifting change in the 2018 report was redefining AD by its biological changes in the brain rather than the clinical phenotype. By moving away from the prior definition based on clinical-pathological presentation to
biological characterization, the NIA-AA framework aimed to create a common language in research studies, allow for more aligned comparison of research findings, and facilitate future clinical trials.
The current clinical framework for diagnosing AD is based on symptoms reported by the patient and corroborated by a collateral historian, together with an objective assessment of cognitive decline. The level of confidence in this diagnosis ranges from possible to probable, depending on the presence of typical symptoms, as well as the absence of alternative causes of cognitive decline. A biological diagnosis of AD, which is considered the only definitive diagnosis, presently relies on autopsy findings of amyloid plaques and tau neurofibrillary tangles. Numerous studies have found discordant findings between clinical diagnoses and neuropathological outcomes. It is reported that 10% to 30% of clinically diagnosed persons with AD dementia do not display the neuropathological hallmarks of amyloid plaques and tau neurofibrillary tangles on autopsy. Differentiating the clinical syndrome, composed of symptoms not necessarily specific to AD, from the biological changes seen on neuropathology would allow discovery of novel mechanisms underlying AD neurobiology. Furthermore, enrollment of participants with biologically confirmed AD diagnosis will be critical in evaluating efficacy of disease-modifying therapies as they become available in the future.
The 2018 updated research framework proposed a biomarker classification system called AT(N). The framework categorizes individuals based on the presence or absence of the following core AD pathological features: aggregated amyloid beta protein (A), aggregated tau protein (T), and neurodegeneration or neuronal injury (N). The biomarkers for amyloid and tau were selected for their high specificity to AD-related changes found in autopsy studies. In the AT(N) framework, AD is defined by the presence of both amyloid (A) and tau (T). Since neurodegeneration is not a feature used in the neuropathological diagnosis of AD and is seen in other neurodegenerative disorders as well, it is in parentheses in the proposed research framework. However, its inclusion in the framework was necessary to reflect disease severity. Table 59-6 provides a list of validated biomarkers used to identify each component of the AT(N) framework.
TABLE 59-6 ■ CSF BIOMARKERS OF ALZHEIMER DISEASE
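The AT(N) profile-to-category mapping described above can be written as a small decision table. The sketch below follows the categories of the 2018 NIA-AA research framework, in which any A+T+ profile falls within biologically defined Alzheimer disease regardless of (N); the function name is hypothetical:

```python
def atn_category(a_positive: bool, t_positive: bool, n_positive: bool) -> str:
    """Map an AT(N) biomarker profile to its 2018 NIA-AA
    research-framework category (illustrative sketch)."""
    if a_positive and t_positive:
        # A+T+ defines Alzheimer disease whether or not (N) is present
        return "Alzheimer disease"
    if a_positive and not t_positive and not n_positive:
        return "Alzheimer pathologic change"
    if a_positive and not t_positive and n_positive:
        return "Alzheimer and concomitant suspected non-AD pathologic change"
    if not a_positive and (t_positive or n_positive):
        # T+ or (N)+ without amyloid falls outside the Alzheimer continuum
        return "non-AD pathologic change"
    return "normal AD biomarkers"
```

Note that amyloid positivity is required for every category on the Alzheimer continuum, mirroring the framework's premise that amyloid is necessary but not sufficient for a biological AD diagnosis.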
The AT(N) framework has accelerated discovery of novel CSF and more recently blood-based biomarkers of amyloid, tau, neurodegeneration, neuroinflammation, and other pathological changes seen in AD. Many of these biomarkers, especially the core AD biomarkers including Aβ-42, total tau (t-tau), and phosphorylated tau-181 (p-tau-181), can now be reliably measured in CSF and correlate well with PET brain imaging and neuropathological findings. Although validation of various AD biomarkers in CSF, blood, and autopsy brain studies is ongoing, they are being actively used to enroll participants in treatment trials and clinical and translational studies. Based on data emerging from longitudinal cohort studies and randomized clinical trials, the reported accumulation of AD biomarkers over time points to a continuum of disease progression, as opposed to the older clinical conceptualization of distinct levels of disease staging.
There is converging evidence that amyloid deposition in the brain is the first neuropathological change seen in persons with AD. This conclusion is based on studies of people with early-onset AD due to an autosomal dominant mutation, those living with Down syndrome, and findings from transgenic animal models of AD. However, the presence of amyloid alone is
viewed as only an early stage within the Alzheimer continuum and is necessary, but not sufficient, for the biological diagnosis of AD. While the amyloid cascade hypothesis suggests amyloid accumulation causes changes leading to tau accumulation and eventually neurodegeneration, it is also accepted that, unlike tau, amyloid is not strongly linked to cognitive function. Amyloid may have downstream effects on tau and neurodegeneration, but the AT(N) framework does not assume an order of causality.
One of the most notable contributions of the AT(N) framework is to effectively screen and identify participants for enrollment in clinical trials based on their biomarker profile rather than nonspecific clinical presentations. The framework also provides validated and uniform biological outcome measures to assess the efficacy of disease-modifying therapies and determine dose-response relationships. Given that AD is a chronic and slowly progressive condition, trials to find effective treatments are challenged with finding reliable surrogate outcomes that could change quickly, rather than waiting for alterations in clinical or behavioral phenotype that would require longer and costly trials. The AT(N) framework would enrich treatment trials with biologically confirmed AD participants at higher risk of decline, as well as serve as a surrogate endpoint in disease-modifying therapy trials. Additionally, it could serve as a marker of treatment response.
The AT(N) framework is not meant to be comprehensive and exclusive, but rather adaptive to newer scientific discoveries. A new evolution in the framework is ATX(N), with “X” representing novel candidate biomarkers to help expand and explain underlying mechanisms in AD. Examples of potential mechanisms include neuroinflammation, synaptic dysfunction, microvascular changes, mitochondrial oxidative damage, glial activation, neurochemical deficits, and BBB dysfunction. The ATX(N) framework would help expand the scientific understanding of AD as well as investigate the heterogeneity of the disease.
AD Biomarkers
Cerebrospinal fluid AD biomarkers Amyloid and tau biomarkers are central in identifying AD pathology and will help explore disease heterogeneity. They will likely play critical roles in AD, including early diagnosis, disease progression, screening, risk prediction, target engagement, treatment monitoring, and validation of novel biomarkers. The core AD biomarkers,
namely amyloid Aβ-42, t-tau, and p-tau181, have been validated through multiple CSF, PET brain imaging, and neuropathological studies. Given their proven reliability, amyloid and tau biomarkers will serve as a framework for identification of novel biomarkers to address contributions from potential coexisting pathologies, such as vascular insults, Lewy body dementia, Parkinson disease, and TDP (TAR DNA-binding protein)-43 that likely contribute to clinical symptoms and cognitive decline.
In the past decade, new CSF biomarkers have been identified representing multiple mechanisms active in AD. These mechanisms include glial activation, neuroinflammation, synaptic degeneration, and neuronal/axonal death. Importantly, many AD biomarkers can now be measured in CSF and several in blood as well. Although the validation of CSF biomarkers is currently ongoing, once approved, they will have major clinical applications in the diagnosis, progression, and treatment of AD and related disorders. Table 59-6 summarizes CSF biomarkers representing various molecular pathways active in AD. While some of the CSF biomarkers, such as t-tau, NfL, YKL-40, and interleukins, may not be specific to AD, they elucidate disease progression and are related to symptoms. Measures of neurodegeneration may be particularly important, given clinical features of AD tend to track closely with synaptic dysfunction, and eventually neuronal death. Synaptic loss is an early pathological change in AD and closely associated with cognitive impairment.
Neuroimaging AD biomarkers The multimodal neuroimaging AD biomarkers, including CT, MRI, amyloid PET, and tau PET brain scans, have been examined over years and validated as effective measures to aid in the diagnosis and tracking of progression of AD, and in the exclusion of other treatable causes of dementia, such as stroke, tumor, or NPH. Recent advances in neuroimaging include novel PET radiotracers, translocator protein (TSPO) PET and synaptic vesicle protein 2A (SV2A) PET, to image neuroinflammation and to study synaptic dysfunction and loss, respectively. Clinically, [18F] FDG PET is available to visualize synaptic dysfunction, neuronal cell loss, and metabolic dysfunction to help differentiate AD from non-AD dementias such as FTD and Lewy body disease. Additionally, amyloid PET and tau PET are available in research settings, but not yet approved for clinical use or reimbursement. Once approved, amyloid and tau PET scans will become important for biological diagnosis of AD.
Blood-based AD biomarkers Although highly informative, the utility of CSF and PET biomarkers is limited by their cost and by logistical and practical barriers related to the lumbar puncture procedure. These limitations make CSF measures unlikely to become widely used in the clinical setting.
Consequently, the pursuit of blood-based AD biomarkers has intensified. Several AD biomarkers can now be measured in blood through emerging analytical techniques. However, a major limitation in the broad utility of these biomarkers is significant heterogeneity in results due to multiple factors, including variance in sample collection, storage, preanalytical processing, assays, and data analysis. AD biomarkers that can be measured in plasma include Aβ42, Aβ40, total tau, p-tau181, p-tau217, p-tau231, neurofilament light (NfL), interleukins, and YKL-40. Highly sensitive mass spectrometry assays are used to measure plasma levels of Aβ42, Aβ40, and their ratios, while single molecule array (SIMOA) technology is used to assay NfL, interleukins, and p-tau and its analogues in plasma.
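The Aβ42/Aβ40 ratio mentioned above is a simple normalization: dividing Aβ42 by Aβ40 partly corrects for between-person differences in overall amyloid production, and a lower ratio indicates amyloid deposition. A minimal sketch of that computation follows; the function names are hypothetical and, critically, any positivity cutoff is assay-specific and must come from the measuring laboratory's own validation:

```python
def amyloid_ratio(abeta42_pg_ml: float, abeta40_pg_ml: float) -> float:
    """Aβ42/Aβ40 ratio from plasma or CSF concentrations (pg/mL).
    Normalizing Aβ42 to Aβ40 partly corrects for individual
    differences in total amyloid-beta production."""
    return abeta42_pg_ml / abeta40_pg_ml

def amyloid_positive(ratio: float, assay_cutoff: float) -> bool:
    # LOWER ratios suggest brain amyloid deposition (Aβ42 is
    # sequestered into plaques). The cutoff here is a placeholder:
    # it differs between mass-spectrometry and immunoassay platforms.
    return ratio < assay_cutoff
```

This is why the preceding paragraph stresses heterogeneity across assays and preanalytical handling: the ratio itself is trivial arithmetic, but its interpretation depends entirely on platform-specific validation.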
Phosphorylation of tau occurs at multiple sites; phosphorylation at certain sites, such as threonine 231 (p-tau231) and threonine 217 (p-tau217), changes early in AD pathobiology and has been shown to accurately identify amyloid positivity along the AD biomarker continuum and clinical spectrum.
Plasma NfL, a promising biomarker of neurodegeneration, albeit not specific to AD, has been shown to increase in persons with cognitive impairment due to many neurodegenerative disorders, including AD, Parkinson disease, FTD, and cerebrovascular disease.
Elevated plasma NfL is a sign of early neuronal death and can increase during preclinical stages of AD. The field of plasma AD biomarkers is advancing rapidly, and new biomarkers that change during early stages of AD will become invaluable for early diagnosis, for tracking progression from preclinical to symptomatic stages of AD, and for monitoring treatment response. However, these biomarkers are not yet ready for clinical application, as larger studies are necessary to validate them, examine their relationship with clinical phenotype, and harmonize their measurement in plasma.
MANAGEMENT
Managing patients with AD involves presentation of the diagnosis, initiation of medical therapy, assessment and treatment of concomitant depression and/or behavioral concerns, identification of a social support network,
education of patients and caregivers, provision of caregiver support, and initiation of appropriate safety measures.
Presenting the Diagnosis
Presenting the diagnosis of AD to a patient is difficult, as it may generate significant emotional responses from the patient and their family and trigger fear of future demise. Frequently, patients and family members suspect the diagnosis before it is presented, but how they respond to the news depends on personal coping mechanisms, cultural influences, family dynamics, and their preconceived understanding of AD. Clinicians may help patients and families adjust to this diagnosis by using an empathetic, yet honest approach and by providing them with educational and support resources, including those provided by agencies such as the Alzheimer’s Association and the National Institute on Aging Alzheimer’s Disease Education and Referral (ADEAR) Center. In addition, the clinician should emphasize the goals of diagnosing AD in order to take steps to protect the patient’s memory, delay the progression of the disease, and maintain the person’s safety. It is widely recommended to tell both the patient and family the diagnosis using the term “Alzheimer disease,” thus, providing patients and families with a starting point for education. Encouraging both persons with the disorder and caregivers to utilize resources such as local support groups, community resources, and national Alzheimer organizations is an important part of the patient management plan.
Drug Therapy and Nonpharmacologic Therapy
Acetylcholinesterase inhibitors (AChEIs) are the mainstay of therapy for AD. AChEIs increase levels of the neurotransmitter acetylcholine in neuronal synapses, thereby enhancing cholinergic activity in the affected brain regions. Although 18% to 48% of persons may experience improvements in cognition after taking these medications, the majority of patients do not have any noticeable improvement but instead experience a plateau or slowing of their rate of cognitive decline. While prior studies raised questions as to the cost-effectiveness of treating AD patients with AChEIs, newer studies integrating generic drug cost estimates have demonstrated that these drugs are cost-effective. Delaying the progression of cognitive decline may lead to improvements in quality of life, reduced caregiver burden, and decreased economic cost associated with long-term care. AChEIs have not been shown
to be effective in delaying progression from MCI to AD and, thus, are chiefly recommended for use in patients who already have a diagnosis of dementia.
Three FDA-approved AChEIs are actively prescribed in the United States: donepezil (Aricept), galantamine (Razadyne), and rivastigmine (Exelon) (Table 59-7). While all three of these compounds are available as generic medications, some specific long-acting formulations and solutions of these drugs are still under patent (see Table 59-7) and, thus, are not yet available in generic form. In general, the most common adverse effects associated with AChEI use are nausea, anorexia, and diarrhea. Bradycardia, atrioventricular (AV) nodal block, syncope, and unintentional weight loss are additional, potentially serious adverse effects that should trigger deprescribing. It is recommended that patients be started on a low dose of the medication with dose increases approximately every 2 months until a therapeutic dose is achieved (see Table 59-7). Gastrointestinal side effects may be alleviated by taking the medications with food. Sleep disturbances are also common and may improve with altering the dosing schedule.
TABLE 59-7 ■ FDA-APPROVED MEDICATIONS FOR THE TREATMENT OF ALZHEIMER DISEASE
Memantine (Namenda) is an FDA-approved medication for use in moderate-to-severe AD. Memantine is an uncompetitive N-methyl-D-aspartate (NMDA) receptor antagonist. At high concentrations, memantine can inhibit mechanisms related to learning and memory, but at lower
concentrations, it can preserve or enhance memory in animal models of AD. Memantine can protect against the excitotoxic destruction of cholinergic neurons and may inhibit β-amyloid production. In persons with moderate-to-severe AD, memantine may slow the progression of cognitive decline. In addition, studies support that memantine is well tolerated and leads to better outcomes on measures of cognition, activities of daily living, and behavior. Additional studies are needed before memantine can be recommended for earlier stages of AD. In patients with moderate-to-severe AD, combined treatment with a cholinesterase inhibitor and memantine has not been shown to be superior to treatment with either agent alone with regard to cognitive, functional, and behavioral outcomes. In patients who do not tolerate cholinesterase inhibitors due to gastrointestinal side effects or bradycardia, memantine may be used as first-line therapy. With use of either AChEIs or memantine, clinicians should educate families on what to expect from the medications, namely that they work to delay the progression of symptoms, not to significantly improve cognition. Consideration should be given to the modest expected benefit and monthly cost of both types of medications.
In June 2021, the US FDA approved aducanumab under accelerated approval based on the reduction in brain amyloid demonstrated in its clinical trials in patients with MCI and early-stage AD. Aducanumab is a human monoclonal antibody targeting aggregated amyloid and amyloid oligomers. It is designed to enter the brain through the BBB, bind to amyloid plaques and oligomers, and stimulate microglia to clear the amyloid protein. In the two clinical trials leading to the FDA approval, aducanumab significantly lowered amyloid burden in the brain as assessed by amyloid PET scans, in some study participants to near-normal levels. In one of the two trials, treatment resulted in a 22% reduced rate of cognitive decline in the primary outcome measure (the Clinical Dementia Rating—Sum of Boxes) over an 18-month period; however, no statistically significant benefit was seen in the other study. Importantly, the aducanumab studies were initially halted based on futility analyses; reanalysis of datasets including newly collected data revealed a statistically significant reduction in the rate of cognitive decline in only the high-dose group. Administration of aducanumab was also associated with abnormalities on MRI. These side effects, called amyloid-related imaging abnormalities with edema/effusion (ARIA-E) and with hemorrhage (ARIA-H), were seen in 34% to 36% of participants
receiving the high dose. Approximately 80% of those with ARIA did not have symptoms; however, those with symptoms experienced headache, dizziness, visual disturbances, and nausea.
Controversy has arisen over how the study data were analyzed, the true clinical benefit to an individual as measured by the study outcomes, and the social and political pressure on the FDA approval process. Despite these concerns, the FDA approval of aducanumab has ushered in a new research-clinical paradigm that, for the first time, utilized amyloid clearance as a surrogate measure in the approval of a disease-modifying therapy. It is projected that approval of aducanumab will lead to the discovery of newer analogues of antiamyloid monoclonal antibodies with more favorable adverse effect profiles and demonstrated clinical efficacy.
Additionally, there are no clear guidelines on several issues directly relevant to aducanumab treatment, including patient selection, inclusion/exclusion criteria, payment for amyloid PET scan and MRIs necessary to monitor presence and progression of ARIA, and duration of treatment.
In addition to targeting β-amyloid pathways, novel research is focusing on the effects of inhibitors of tau phosphorylation and aggregation and the stabilization of microtubules. Other potential therapeutic agents are directed toward inflammation and oxidation, insulin signaling, mitochondrial function, and nerve growth factor signaling.
Clinical trials have not conclusively shown that treating vascular risk factors delays the development or progression of AD. However, aggressive treatment of vascular risk factors in many patients with memory complaints, including those associated with AD, may be warranted. Vascular risk factor modification has known cardiovascular benefits that may lead to reduction in cerebrovascular disease, stroke, myocardial infarction, and coronary artery bypass grafting—factors strongly linked to cognitive decline. Trials are under way to clarify if vascular risk factor reduction and improved cerebral perfusion modify the course of AD. Until the completion of such trials, clinicians should follow established cardiovascular prevention guidelines for patients presenting with memory complaints, taking into account the patient’s comorbid illnesses, quality of life, treatment costs, and life expectancy.
Evidence also supports that encouraging AD patients to engage in nonpharmacologic interventions, including physical activity and exercise,
mentally stimulating activities, and social activities, may lead to cognitive benefits. Depending on an individual’s physical abilities, comorbid illness, social situation, and interests, clinicians should encourage AD patients and persons with cognitive impairment to seek out opportunities for exercise and activities that promote use of their intact areas of cognitive function. For example, an AD patient with prominent language deficits but intact visuospatial skills may find crosswords or word search puzzles very frustrating, but may enjoy playing checkers or painting birdhouses. Such activities may need to be adjusted over time to account for progressive cognitive changes.
Behavioral Management
Noncognitive neuropsychiatric symptoms of dementia include aggression, agitation, depression, anxiety, delusions, hallucinations, apathy, and disinhibition. Such behaviors may be more distressing to family and caregivers than the actual memory decline. Neuropsychiatric symptoms may be managed by nonpharmacologic as well as pharmacologic interventions. Nonpharmacologic therapies should in general be explored prior to using pharmacologic therapy, unless the person’s agitation threatens his or her safety or living situation. Chapter 60 includes detailed information on pharmacological and nonpharmacological management of behavioral symptoms of dementia.
Safety Management
Reviewing common safety concerns in persons with dementia may help identify significant risks and provide an opportunity for educating family members and caregivers on what areas to monitor closely and what safeguards to take to protect the person with AD. Some patients may require further evaluation to assess driving safety, which can be done through some occupational therapy departments, local driving schools, state Department of Motor Vehicles, or other similar agencies. Pill boxes, electronic reminders, or other similar medication planners may facilitate correct administration of medications and allow family or caregivers to help in setting up the medications properly. Other safety concerns, such as proper use of the stove, woodworking equipment, and access to firearms, should be discussed and appropriate supervision and/or limitations be arranged.
Caregiver Support
There is convincing evidence that the effects of AD are felt not only by the patient but also by the caregivers. Caregivers have increased depression, work absence, and health problems compared to those not caring for a family member with dementia. Clinicians and health care team members should direct caregivers toward educational resources on the disease, practical tips on helping someone with AD optimize their function, effective communication strategies, legal and financial planning, and the importance of caregiver health and social support. Use of respite services from family, friends, neighbors, home health agencies, and local adult day centers may allow for caregivers to take the appropriate time needed to maintain their own health and social connections. Local support groups allow for caregivers to share ideas and experiences. Other initiatives such as memory cafés, dementia-friendly communities, and online resources may provide caregivers important support and interaction.
PREVENTION
The Systolic Blood Pressure Intervention Trial Memory and Cognition in Decreased Hypertension (SPRINT-MIND) randomized trial showed that targeting a systolic blood pressure goal of 120 mm Hg, relative to the then-standard goal of 140 mm Hg, led to a statistically significant 19% reduction in the incidence of MCI (see Chapter 79 for details). Currently, there are no established preventive therapies for AD and no approved medications to treat MCI. Evidence supports that therapies that either delay or prevent the onset of AD may need to be started in midlife in high-risk populations in order to significantly influence the onset and course of the disease. Because the underlying pathologic changes that eventually lead to clinical AD begin decades before the onset of symptoms, primary prevention trials with conversion to AD as their primary outcome will be costly and time consuming. Integrating biomarkers with strong relationships to clinically relevant outcomes into such primary prevention trials may allow for earlier identification of disease-modifying effects of potential preventive therapies. Given the multifactorial nature of AD, future preventive strategies will most likely target a variety of mechanisms related to disease progression, similar to those used in cardiovascular disease prevention. Some potential prevention therapies currently under investigation include antiamyloid
therapies, vascular risk factor modification, anti-inflammatory medications, antioxidants, and lifestyle interventions such as exercise, social engagement, and cognitive stimulation.
SPECIAL ISSUES
Comorbidity
Managing comorbid illnesses in a person with AD can be challenging. Patients may forget to take important medications for comorbid conditions, which, in turn, may exacerbate confusion. Persons with AD may not be able to remember symptoms related to other comorbid illnesses, such as recent episodes of chest pain, shortness of breath, or the location of arthritis pain. Thus, it is important to educate families and caregivers on how they can best assist their loved one in managing comorbid illnesses. For example, a caregiver of an AD patient with diabetes may need to directly observe insulin administration and meal intake to maintain good glucose control. An AD patient with significant chronic pain may need their caregiver to write down the time of day that they become more agitated, with the goal of optimizing the timing of pain medications. Each management plan will need to be tailored to the AD patient's comorbid illnesses and social situation, utilizing community resources as available.
Persons with dementia are more likely to experience delirium in response to medical illness or surgery. Thus, educating families that acute episodes of confusion may signal an underlying infection or other illness may help them seek out appropriate medical care when watching for behavioral changes. Caregivers should be forewarned that an AD patient is at increased risk for delirium following surgical procedures and that interventions such as avoiding anticholinergic and sedative-hypnotic medications, maintaining good sleep-wake cycles, optimizing pain control, using hearing aids and glasses as appropriate, and establishing daytime activities may help reduce the risk of postoperative delirium (see also Chapter 58).
Care Settings
Ensuring a safe living environment is a high priority for patients with any form of dementia, including AD. Patients living in their own house or independent apartment may need additional safety measures implemented around their home, such as posting emergency numbers on the wall, using
timers to remind them to turn the stove off, using medical alert systems, and optimizing use of home care services to assist with tasks such as bathing, cleaning, meal preparation, transportation, and medication administration. Once patients can no longer identify what to do in an emergency situation, then 24-hour supervision is recommended. Through partnering with family and friends and use of community resources, some individuals with AD are able to stay in their own home their entire lives. However, a variety of social circumstances, medical or behavioral issues, or economic limitations may necessitate that a person with AD move to a more structured, supervised setting. The choice of setting (eg, assisted living facility, skilled nursing facility, or locked dementia unit) varies from patient to patient and depends on the degree of cognitive impairment, cultural preferences, comorbid illnesses, economic resources, and behavioral and safety concerns.
Palliative and End-of-Life Care
Upon diagnosis of AD, many patients and families have questions as to what to expect in the years ahead. Since the course of AD progression may depend not only on genetic and environmental factors but also on comorbid medical conditions, the rate of decline is difficult to predict. Once an AD patient is medically treated and all potentially reversible contributing factors have been addressed, obtaining repeat cognitive testing may give the clinician an idea of the trajectory of the individual's decline and help inform the family on what to expect in the years ahead. Providing information to family and caregivers early in the disease course on end-of-life planning may help smooth this difficult transition later in the illness. Use of respite services, home health aides or family members, or palliative care may help the person with AD stay in the home longer. If their social network cannot support the patient as care needs increase, then nursing home placement or hospice care may be necessary. Caregivers of AD patients may go through feelings of guilt when a loved one is moved from home to a facility, so appropriate support should be provided. Capacity for decision making should be assessed regularly throughout the course of the illness, with appropriate activation of advance care planning when the patient is no longer able to make their own health care decisions.
Advanced dementia is associated with poor nutritional intake, urinary incontinence, skin breakdown, and infections such as pneumonia. Palliative and end-of-life care services are increasingly being used for patients with
end stages of AD and other forms of dementia. As the disease progresses, patients may reach a point when they are no longer able to express their needs. When patients reach a stage of disease where they are no longer able to engage meaningfully in social interactions or participate in self-care, consideration should be given to discontinuing cholinesterase inhibitors and/or memantine therapy. At that point, medication regimens may be simplified to focus on therapies that optimize patient comfort. As swallowing difficulties develop, modified diets and one-on-one feeding may be needed to maintain a patient's nutritional status. Feeding tubes are not recommended at the end of life for patients with advanced AD, as they do not prolong survival or increase comfort and have not been shown to reduce the risk of pressure sores, infection, or aspiration.
Hospice care can help with symptom management late in the course of the illness. Caregiver involvement in Alzheimer support groups can provide comfort during the unique grieving process related to dementia, as family and caregivers watch the cognitive and personality transformations in their family member with AD.
SUMMARY
AD is the leading cause of dementia, with 44 million individuals currently affected worldwide. Unless effective preventive strategies are identified, it is anticipated that the prevalence of AD will double every 20 years. Given the widespread prevalence of AD and its impact on the well-being and quality of life of patients and their caregivers, it is critical for clinicians to be well trained in identifying early cognitive changes, differentiating AD from other common medical and psychiatric conditions, diagnosing the disorder, and developing an effective management plan with their patients and families. Knowledge and use of educational and community resources can provide additional culturally tailored support to AD patients and their caregivers. In most situations, AD can be effectively diagnosed and managed within a primary care setting through careful history-taking, a physical examination, and brief cognitive testing. Ancillary laboratory tests and neuroimaging can help differentiate between various causes of memory loss and different types of dementia. AD treatment involves not only pharmacologic therapy—with cholinesterase inhibitors, NMDA receptor antagonists, and the newly approved aducanumab and potentially other monoclonal antibodies directed against amyloid—but also careful assessment of safety,
behavioral concerns, and education for the patient, family, and other caregivers. While preventive therapies have not yet been established, novel therapies are under investigation to delay or preferably arrest the development and progression of AD. Clinicians are encouraged to be active champions of educational and research efforts to improve early diagnosis, treatment, and prevention of AD by promoting clinical research participation among willing patients and families. Annual updates on large-scale initiatives such as the United States National Alzheimer’s Project Act (NAPA), Alzheimer’s Disease International’s World Alzheimer Report, the Alzheimer’s Association’s Facts and Figures, and other international collaborations and publications will keep clinicians informed on global efforts to optimize early diagnosis and effective care of patients at risk for AD and related dementias.
FURTHER READING
Albert MS, DeKosky ST, Dickson D, et al. The diagnosis of mild cognitive impairment due to Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement. 2011;7(3):270–279.
Callahan CM, Boustani MA, Unverzagt FW, et al. Effectiveness of collaborative care for older adults with Alzheimer disease in primary care: a randomized controlled trial. JAMA. 2006;295(18):2148–2157.
Cordell CB, Borson S, Boustani M, et al; Medicare Detection of Cognitive Impairment Workgroup. Alzheimer’s Association recommendations for operationalizing the detection of cognitive impairment during the Medicare Annual Wellness Visit in a primary care setting. Alzheimers Dement. 2013;9(2):141–150.
Hyman BT, Phelps CH, Beach TG, et al. National Institute on Aging- Alzheimer’s Association guidelines for the neuropathologic assessment of Alzheimer’s disease. Alzheimers Dement. 2012;8(1):1–13.
Jack CR Jr, Bennett DA, Blennow K, et al. NIA-AA Research Framework: Toward a biological definition of Alzheimer's disease. Alzheimers Dement. 2018;14(4):535–562.
Kales HC, Gitlin LN, Lyketsos CG; Detroit Expert Panel on Assessment and Management of Neuropsychiatric Symptoms of Dementia. Management of neuropsychiatric symptoms of dementia in clinical settings: recommendations from a multidisciplinary expert panel. J Am Geriatr Soc. 2014;62(4):762–769.
Livingston G, Huntley J, Sommerlad A, et al. Dementia prevention, intervention, and care: 2020 report of the Lancet Commission. Lancet. 2020;396:413–446.
McKhann GM, Knopman DS, Chertkow H, et al. The diagnosis of dementia due to Alzheimer’s disease: recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimers Dement. 2011;7(3):263–269.
Norton S, Matthews FE, Barnes DE, Yaffe K, Brayne C. Potential for primary prevention of Alzheimer’s disease: an analysis of population- based data. Lancet Neurol. 2014;13(8):788–794.
Sachdev PS, Blacker D, Blazer DG, et al. Classifying neurocognitive disorders: the DSM-5 approach. Nat Rev Neurol. 2014;10(11):634–642.
Sperling RA, Aisen PS, Beckett LA, et al. Toward defining the preclinical stages of Alzheimer's disease: recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimers Dement. 2011;7(3):280–292.
US Department of Health and Human Services National Alzheimer’s Project Act. http://aspe.hhs.gov/national-alzheimers-project-act. Accessed September 10, 2015.
US Food and Drug Administration Postmarket Drug Safety Information for Patients and Providers: Aducanumab (marketed as Aduhelm) Information. https://www.fda.gov/drugs/postmarket-drug-safety-information-patients-and-providers/aducanumab-marketed-aduhelm-information
Zetterberg H, Bendlin BB. Biomarkers for Alzheimer's disease—preparing for a new era of disease-modifying therapies. Mol Psychiatry. 2021;26:296–308.
Chapter 60
Behavioral Symptoms of Dementia and Psychoactive Drug Therapy
Carol K. Chan, Constantine G. Lyketsos
EPIDEMIOLOGY
It is estimated that 5.3 million Americans live with Alzheimer disease (AD), the most common cause of dementia, and that 13.8 million people older than 65 years in the United States will have the disease by 2050. In the United States, annual health care costs for persons with AD are more than $172 billion, including $123 billion in costs to Medicaid and Medicare alone.
Neuropsychiatric symptoms (NPS) affect almost all persons with dementia over the course of illness. Although cognitive deficits are the hallmark of dementia, almost 98% of patients with AD experience depression, agitation, anxiety, psychosis, hallucinations, apathy, eating disorders, disinhibition, and/or sleep disturbances. Depression, apathy, and anxiety are the most common NPS in dementia. NPS are also present in the prodromal or mild cognitive impairment (MCI) stages of dementia.
Depression and irritability are common even prior to the onset of MCI and dementia and appear to be the first symptoms in well over half of people who later develop dementia. Late-life onset of NPS of any severity in individuals without dementia, lasting for over 6 months and not attributable to another concurrent psychiatric disorder (such as major depressive disorder), is now referred to as mild behavioral impairment (MBI). While NPS are seen at all stages of dementia, including prior to cognitive decline, the severity of NPS increases with progressive cognitive decline in both community and nursing home populations.
The impacts of NPS on both patients and caregivers are significant: They are associated with worse quality of life, increased mortality, accelerated disease progression, and increased cost of care and caregiver burden/stress. Difficult behaviors and psychotic symptoms are among the highest determinants of institutionalization.
NPS are broadly categorized into four groups: (1) affective and motivational symptoms, such as depression; (2) psychotic symptoms, such as delusions or perceptual disturbances; (3) disturbances of basic drives, including feeding and sleeping; and (4) disinhibited and other socially inappropriate behaviors. Examples of symptoms in each category of NPS are in Table 60-1.
TABLE 60-1 ■ NEUROPSYCHIATRIC SYMPTOMS
DOMAIN — EXAMPLES OF SYMPTOMS
Depression/dysphoria: Low mood; tearfulness; changes in sleep, appetite, energy; negative thoughts about him/herself (eg, putting him/herself down, feeling like a failure, feeling like he/she deserves to be punished, feeling like the family would be better off without him/her)
Anxiety: Frequently asking for reassurance; easily becoming upset when separated from caregiver; worry about planned events; periods of feeling shaky, unable to relax, or excessively tense; avoidance of certain places or situations that make him/her nervous
Apathy/indifference: Reduced interest in activities; poor motivation; less spontaneous (eg, less likely to initiate conversation)
Irritability/lability: Easily upset or frustrated; rapid changes in mood (eg, sudden flashes of anger); impatience (eg, having difficulty coping with small delays)
Agitation/aggression: Arguing; pacing; disruptive vocalizations; physical aggression (eg, throwing things, attempting to hurt others); rejection of care (eg, bathing, changing clothes)
Disinhibition: Talking openly about very personal or private matters; overfamiliarity with strangers; impulsive behaviors; overeating; overspending
Nighttime behaviors: Difficulty falling asleep; early awakening; excessive nighttime awakenings (more than getting up once or twice to use the bathroom and falling back asleep immediately); excessive napping during the day
Appetite/eating: Weight changes; loss of appetite or increase in appetite; changes in types of food he/she likes (eg, craving too many sweets)
Motor disturbance: Wandering; rummaging; pacing; doing things repeatedly (eg, picking at things); constant fidgeting
Hallucinations: Can occur in any sensory modality, but visual and auditory hallucinations (ie, seeing or hearing things that are not present) are the most common
Delusions: False beliefs (eg, that others are stealing from him/her or planning to harm him/her, that the spouse is having an affair, that family members plan on abandoning him/her)
Elation/euphoria: Appearing excessively happy; inflated sense of self; laughing inappropriately; childish sense of humor
LEARNING OBJECTIVES
Learn the presentation, epidemiology, and pathophysiology of common neuropsychiatric symptoms (NPS), including behavioral disturbances, seen in patients with dementia.
Understand the best approach to evaluate NPS in patients with dementia and effective strategies to manage such symptoms.
Learn about the significance and efficacy of nonpharmacologic interventions.
Understand the appropriate indications, limitations, and adverse effects of pharmacologic interventions.
Key Clinical Points
NPS are seen in up to 98% of patients with dementia and result from loss of higher-order behavioral control due to disease involvement of major brain networks and neurotransmitter systems.
Patients with NPS have higher mortality and progress more rapidly from mild to severe dementia.
Careful history-taking is essential and exclusion of delirium is paramount for proper diagnosis and management of NPS associated with dementia.
Onset of new NPS in patients with dementia, especially systematized delusions, can be mistaken for another psychiatric disorder, such as a major depressive disorder with psychotic features or schizophrenia.
Nonpharmacologic interventions should be the first-line treatment and antipsychotics should be avoided as much as possible, given their lack of efficacy in randomized trials and higher incidence of adverse treatment effects.
Appropriate indications for medications include failure of nonpharmacologic therapies and presence of NPS severe enough to interfere with the patient’s overall quality of life and function.
If medications are started, use slow titration, use the lowest effective dose, and reassess the risk/benefit ratio on a regular basis.
PATHOPHYSIOLOGY OF NEUROPSYCHIATRIC SYMPTOMS
The underlying cause of NPS is multifactorial. Causal contributors include a combination of underlying brain circuitry disruptions, preexisting risk factors, and precipitating stressors. Disruptions in white matter and its associated cortices in sensory and limbic brain areas are thought to be involved in NPS in the context of dementia. In particular, disruptions to the corticocortical and frontal-subcortical circuits, key to regulating emotions and behaviors, have been implicated in the pathogenesis of NPS. AD pathologies, such as neuronal loss and neurofibrillary tangles, are abundant in the limbic system, which includes the amygdala, basal forebrain, brainstem, and hypothalamus. The hypothalamus, among other functions, is important for regulating appetite, circadian rhythms, and emotional responses. Though these areas have been implicated in NPS, the degree to which pathologic involvement of these structures correlates with specific NPS has not been well studied.
Dysfunction in the projections of excitatory and inhibitory neurons from the brain stem to cortical regions modulating monoamines (dopamine, serotonin, and norepinephrine), glutamate, and acetylcholine, may also contribute to NPS. For instance, agitation and aggression have been associated with cortical dysfunction in the insula, amygdala, anterior cingulate gyrus, hippocampus, middle frontal gyrus, lateral frontal gyrus, and lateral temporal gyrus; these behaviors may be related to deficits in acetylcholine neurotransmission. Similarly, prominent aggressive behavior in patients with AD has been associated with loss of serotonin in the inferior frontal cortex. Psychosis is most prominently associated with functional deficits in the anterior cingulate cortex and frontal cortex, both of which receive dopaminergic innervation. Depressive symptoms have been related in part to disturbances in serotonin, norepinephrine, and dopamine, and have been associated with loss of serotonergic receptors in the hippocampus, noradrenergic neurons in the locus coeruleus, and serotonergic neurons in the raphe nucleus. The resulting imbalances of dopamine, noradrenaline, and
serotonin neurotransmitters lead to NPS. Apathy is associated with dopamine and noradrenaline; depression with serotonin and noradrenaline; psychosis with dopamine; and agitation and aggression with dopamine, noradrenaline, and serotonin.
These neurological disruptions, in combination with other underlying risk factors (such as personality factors, resilience, and psychiatric comorbidities), increase an individual’s vulnerability to stressors or “triggers.” Categories of stressors include patient factors (eg, physical discomfort, unmet needs, medical illness), environmental changes (eg, overstimulation or understimulation), and interpersonal stressors (eg, unrealistic caregiver expectations, negative communications). Thus, responses to these stressors in individuals with disruptions in brain circuitry due to dementia may manifest as NPS.
DIAGNOSTIC APPROACH
History Taking
Due to the nature of cognitive impairment, a patient with dementia may not be able to provide an accurate history because of lack of insight, memory loss, and/or language problems. It is therefore critical to involve a reliable and knowledgeable informant during history taking. This helps elucidate a clear timeline of symptoms—whether the onset was insidious versus abrupt, the frequency if symptoms are episodic, and whether there have been precipitating events. Common stressors or “triggers” associated with NPS are listed in Table 60-2 and should be considered during history taking. The severity of symptoms and associated distress (of both the patient and their caregiver) should be assessed. A thorough psychiatric and medical history is essential in considering differential diagnoses, which are discussed in greater detail below. An important part of history taking is asking caregivers how the behaviors are currently being managed and how the patient is responding to these interventions. Lastly, a careful history of physical symptoms should also be taken to determine whether medical causes and delirium may be contributing to the behavioral disturbance.
TABLE 60-2 ■ COMMON CONTRIBUTING CAUSES OF NEUROPSYCHIATRIC SYMPTOMS
Physical Examination
Physical and neurological examination should be completed as part of the assessment to identify factors that may contribute to or worsen NPS, such as physical discomfort or delirium. Physical findings such as signs of infection, shortness of breath, pain, fluid overload or new neurological deficits may point to delirium due to an acute medical condition.
Clinical Measurements
The Neuropsychiatric Inventory (NPI) and its variations can be a useful tool in quantifying NPS. The NPI is the most widely used instrument for measuring NPS in clinical research. It includes questions pertaining to changes in the patient’s behavior with screening for the presence of NPS, in addition to ratings of their frequency and severity. The Neuropsychiatric Inventory-Questionnaire (NPI-Q) is a brief version suitable for use in clinical settings. It is a self-administered informant-based instrument that measures the presence and severity of 12 NPS, as well as informant distress. The Neuropsychiatric Inventory-Clinician Rating (NPI-C) is a clinician version that includes expanded domains and items.
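For illustration, the NPI’s conventional scoring can be sketched in a few lines: each endorsed domain is rated for frequency (1–4) and severity (1–3), the domain score is their product, and the total is the sum across endorsed domains. The function and variable names below are our own, not part of the published instrument:

```python
# Illustrative sketch of NPI-style scoring (not the official instrument).
# Each endorsed domain is rated for frequency (1-4) and severity (1-3);
# the domain score is conventionally frequency x severity.

def npi_domain_score(frequency: int, severity: int) -> int:
    """Score one NPI domain as frequency x severity."""
    if not (1 <= frequency <= 4 and 1 <= severity <= 3):
        raise ValueError("frequency must be 1-4, severity 1-3")
    return frequency * severity

def npi_total(domains: dict) -> int:
    """Sum the domain scores across all endorsed domains."""
    return sum(npi_domain_score(f, s) for f, s in domains.values())

ratings = {
    "delusions": (2, 1),   # (frequency, severity)
    "agitation": (3, 2),
}
print(npi_total(ratings))  # 2*1 + 3*2 = 8
```

Note that the briefer NPI-Q dispenses with frequency ratings and records only severity per domain, which is one reason it is faster to administer in clinical settings.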
Medication Review
A careful review of medications, including the time course of symptoms in relation to recent medication changes, is needed. Certain classes and combinations of medications may contribute to NPS. Classes of medications most likely to be associated with delirium, largely due to anticholinergic effects, include opiates, anticholinergics, benzodiazepines, antihistamines, tricyclic antidepressants (TCAs), muscle relaxants, and antiepileptic medications. Common examples of anticholinergic medications are listed in Table 60-3. Behavioral changes associated with these medications may include sedation, changes in sleep-wake cycle, and worsening confusion or agitation.
Medications used to treat Parkinson disease, such as dopaminergic agents, can precipitate impulsive behaviors. Anticholinergics, amantadine, dopaminergic agents, and catechol-O-methyl transferase (COMT) inhibitors, often used in Parkinson disease, can exacerbate psychotic symptoms, such as hallucinations and delusions.
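One way such a medication review can be operationalized is with an anticholinergic burden scale, which assigns each drug a score (commonly 0–3) and sums the scores across the regimen; higher cumulative totals are generally taken to indicate higher risk. The sketch below is purely illustrative; the per-drug scores are placeholders, not values from any validated scale:

```python
# Illustrative anticholinergic-burden tally. The per-drug scores here
# are placeholders, NOT values from a validated instrument -- in
# practice, use a published scale such as the Anticholinergic
# Cognitive Burden (ACB) scale.
ILLUSTRATIVE_SCORES = {
    "diphenhydramine": 3,   # antihistamine
    "amitriptyline": 3,     # tricyclic antidepressant
    "oxybutynin": 3,
    "ranitidine": 1,
}

def anticholinergic_burden(med_list):
    """Sum the burden scores of a patient's medications;
    drugs not in the table contribute 0."""
    return sum(ILLUSTRATIVE_SCORES.get(m.lower(), 0) for m in med_list)

burden = anticholinergic_burden(["Diphenhydramine", "Ranitidine"])
print(burden)  # 3 + 1 = 4 -> would prompt a closer look at the regimen
```

A tally like this only flags candidates for deprescribing; the clinical decision still rests on the indications, alternatives, and the patient’s overall picture.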
TABLE 60-3 ■ MEDICATIONS WITH ANTICHOLINERGIC ACTIVITY, WHICH HAVE INCREASED RISK OF CAUSING DELIRIUM
Diagnostic Testing
For acute, new-onset NPS in dementia, work-up with a physical examination and laboratory studies is needed in many cases to evaluate for an underlying general medical cause. Laboratory testing typically consists of complete blood count, metabolic panel, liver function tests, and urinalysis/urine culture. Thyroid function tests, folate and vitamin B12 levels, toxicology screening, and electrocardiogram may be considered as additional tests if indicated based on history and physical examination. If there are signs and symptoms of a respiratory infection, chest radiography is indicated. Brain imaging, such as computed tomography (CT) or magnetic resonance imaging (MRI), may be indicated, particularly if focal neurologic findings are present.
Electroencephalogram (EEG) is indicated if seizures are suspected. Cerebrospinal fluid analysis is rarely needed but indicated if meningitis or encephalitis is suspected.
DIFFERENTIAL DIAGNOSIS
Differential diagnosis for NPS includes medical conditions, delirium, and primary psychiatric disorders such as major depression, bipolar disorder, and schizophrenia. Presentation with acute changes in cognition and behavior should raise suspicion for delirium, and the underlying medical cause for delirium should be investigated accordingly.
In formulating a differential diagnosis, one should keep in mind that there may be more than one cause. Older individuals with primary psychiatric disorders such as major depressive disorder, bipolar disorder, anxiety disorders, and schizophrenia can develop dementia or delirium superimposed on their psychiatric illnesses. There are also late-onset forms of these disorders. Late-onset depression and anxiety are common among older adults (and among individuals with dementia), while late-onset bipolar disorder and schizophrenia are rare. However, if new psychiatric symptoms emerge in the setting of dementia, the underlying etiology is likely dementia as opposed to a separate, concurrent psychiatric disorder.
Delirium
If a patient exhibits a sudden drop or fluctuation of cognition, delirium should be considered in the differential diagnosis. Delirium is characterized by acute onset over hours or days with a fluctuating course. Cognitive deficits in delirium typically include inattention, disorganized thinking, or an altered level of consciousness. Delirium has a wide range of presentations, including hyperactive (eg, psychosis and agitation), hypoactive (eg, severe apathy, lethargy, and withdrawal), and mixed presentations where features of both hyperactive and hypoactive delirium are present. It is a syndrome with multiple potential etiologies. Providers should rule out delirium first in the setting of any acute change in cognition and/or consciousness, and assess for potential etiologies such as medication withdrawal, metabolic imbalance, infection, and intoxication.
Once the underlying cause is corrected, the patient’s mental status and behavior should improve, though delirium commonly persists well beyond correction of the underlying cause, in some cases for weeks to months. Some patients take a “cognitive hit” and may not fully return to their prior baseline. Short-term, low-dose use of oral or intravenous haloperidol or atypical antipsychotics can be considered for management of significant agitation in the context of delirium. Commonly used medications and starting doses include the following:
Haloperidol (0.25–0.5 mg oral or intravenous every 6 hours as needed for agitation)
Quetiapine (12.5–25 mg oral every 6 hours as needed for agitation)
Olanzapine (2.5 mg oral or 1.25–2.5 mg intramuscular every 6 hours as needed for agitation)
Risperidone (0.25–0.5 mg oral every 6 hours as needed for agitation)
Depressive Symptoms
Major depressive disorder is a heterogeneous syndrome with a wide severity range. Symptoms include sleep disturbance, reduced energy, anhedonia, guilt, and suicidal ideation. It may include psychotic symptoms such as delusions and hallucinations. In individuals with dementia who experience new symptoms of depression, it can generally be presumed that the depressive symptoms are due to underlying neurodegeneration. Depressive symptoms of poor concentration, memory deficits, and anhedonia may be confused with the cognitive deficits and apathy that are common in dementia.
Up to 50% of individuals with dementia will suffer from depression over the course of their illness, and it is one of the most common psychiatric symptoms in early dementia. Depression in dementia is associated with increased health care utilization, greater severity and acceleration of cognitive impairment, decreased quality of life for the affected individual and caregiver, and increased risk of suicide.
Among individuals with AD, risk factors for depression include older age, female gender, personal history of depression, and less education.
Depression in individuals with dementia may go undetected because the symptoms they experience as part of a major depressive disorder can differ considerably from those of older individuals with normal cognition. For example,
depression in dementia is more likely to present with agitation, irritability, and anxiety. Validated scales such as the Cornell Scale for Depression in Dementia and Geriatric Depression Scale (GDS) may be helpful as measures of depression.
Anxiety Symptoms
Symptoms of anxiety are relatively common in older adults and may be accompanied by agitation. They are often associated with comorbid depression and tend to go unrecognized. Similar to depression in dementia, new anxiety syndromes observed in dementia are most likely related to the underlying dementia as opposed to a separate anxiety disorder. The most common anxiety syndromes include generalized anxiety disorder, panic disorder, and phobias. Patients with posttraumatic stress disorder may become agitated when reexperiencing traumatic and painful memories (“flashbacks”). These episodes may become difficult for patients with dementia to distinguish from reality due to cognitive impairment (eg, short-term memory loss and disorientation) and because remote memory tends to be preserved until late in the course of many dementias. Clinicians should routinely inquire about a history of trauma when evaluating patients with dementia who are agitated.
Psychotic Symptoms
Psychotic symptoms, such as delusions and hallucinations, occur in 20% to 30% of patients with AD and in over 50% of patients with dementia with Lewy bodies (DLB) or Parkinson disease dementia. They may cause distress for the patient (and caregiver), and often contribute to the development of agitation.
Delusions are more common than hallucinations in dementia and can stem from cognitive impairment. For example, memory deficits may lead to the fixed and false belief that a misplaced item was stolen. Individuals with agnosia may misidentify formerly familiar individuals or objects. Less common delusions include delusions that they are being poisoned or that their partner has been unfaithful.
In AD, hallucinations are more likely to be visual than auditory, and tend to occur in the moderate to severe stages of dementia. In many circumstances, auditory hallucinations do not cause distress to the patient and as such, do not need to be treated pharmacologically. Visual hallucinations are more
common in DLB than AD and may be the most clinically useful feature to distinguish DLB from AD. In DLB, hallucinations tend to be well-formed images of people, animals, or objects, but can also appear as simple shapes in the corner of one’s eyes. Hallucinations are typically not distressing in DLB unless they are accompanied by delusions or occur in severely demented individuals.
Of note, many medications used to treat Parkinson disease, such as anticholinergic agents, amantadine, dopaminergic agents, and COMT inhibitors, can exacerbate visual hallucinations and delusions.
GENERAL MANAGEMENT
Dementia Care Models
The “DICE” (Describe, Investigate, Create, and Evaluate) approach (Table 60-4) provides a useful mnemonic for a methodical approach to the management of NPS. The “describe” phase involves characterization of the NPS, enabling the provider to identify underlying patterns or contributory factors to the behavior and establish treatment goals. In the “investigate” phase, the provider examines the patient and identifies potential underlying and modifiable causes. Behavioral disturbances are often multifactorial. As described in the previous section, common contributing factors may include delirium, pain, undiagnosed medical conditions (eg, dehydration, infection, constipation), medication side effects, underlying psychiatric comorbidity, sensory impairment, and environmental factors. In the “create” phase, the patient, caregiver, and treatment team collaborate to design and implement a treatment plan. This may involve both pharmacologic and nonpharmacologic interventions. The final step of the “DICE” approach is for the provider to “evaluate” whether recommended strategies were attempted and effective. If the caregiver did not implement the intervention, the provider should attempt to understand the barriers and brainstorm solutions with the caregiver. If interventions include a psychotropic medication, the provider and caregiver should monitor for changes in behaviors and potential side effects and evaluate the need for continued medication use on an ongoing basis.
TABLE 60-4 ■ DESCRIBE, INVESTIGATE, CREATE, AND EVALUATE (DICE) APPROACH
Describe: Context of behavior; social and physical environment; patient perspective; degree of distress to patient and caregiver.
Investigate: Provider investigates possible causes of problem behavior: undiagnosed medical conditions; underlying psychiatric comorbidity; limitations in functional ability; poor sleep hygiene; boredom, fear, or sense of loss of control; medication side effects; sensory impairment; environmental factors; unmet needs.
Create: Provider, caregiver, and team collaborate to create and implement treatment plan: respond to medical problems; strategize behavioral interventions; provide caregiver education and support; create meaningful activities for the patient; simplify tasks; ensure the environment is safe; enhance communication with the patient; increase or decrease stimulation in the environment.
Evaluate: Provider evaluates whether the interventions have been implemented by the caregiver and whether they are effective.
Data from Kales HC, Gitlin LN, Lyketsos CG, et al. Management of neuropsychiatric symptoms of dementia in clinical settings: recommendations from a multidisciplinary expert panel. J Am Geriatr Soc. 2014;62(4):762-769.
Support for Patients and Caregivers
The Alzheimer’s Association estimates that 60% to 70% of older adults with AD and other dementias live in the community and are cared for by family and friends. Nearly all have unmet needs regarding care, services, and support. The Maximizing Independence at Home (MIND at Home) Study found that 99% of patients had unmet needs. The most common domains with unmet needs included safety (personal and home), general health and medical care, meaningful activities, legal issues, and advance care planning. Higher unmet needs were reported in individuals who were from minority groups, had lower income, had fewer impairments in activities of daily living (ADLs), and had more symptoms of depression. Caregivers, too, reported unmet needs. Ninety-seven percent reported having one or more unmet needs, with the most common domains being resource referrals, caregiver dementia education, mental health care, and general medical health care.
These findings highlight some of the areas that a dementia care team should address for every patient (Table 60-5). The care team should work with the patient and caregiver to maintain the patient’s optimal physical and mental health, provide a safe environment that maximizes the patient’s physical and cognitive abilities, and preserve the patient’s dignity. Ideally, such an environment would allow patients to receive support for their ADLs and instrumental activities of daily living (IADLs), be well-nourished, maintain good sleep hygiene, and engage in activities and socialization. In-home activities tailored to the interests and capabilities of patients with dementia have been demonstrated to significantly increase the patient’s engagement, reduce NPS, and reduce caregiver burden. In-home occupational therapy assessments, using a functional assessment method such as the Assessment of Motor and Process Skills (AMPS), can provide useful data about a patient’s care needs as well as assess the safety of the home environment. For more information about providing supportive care for patients, we refer you to the book The 36-Hour Day by Mace and Rabins (2017) and the Alzheimer’s Association website (www.alz.org).
TABLE 60-5 ■ SUPPORTIVE CARE FOR THE PATIENT WITH DEMENTIA
For caregivers, providing education about the disease and skills training in communication for dementia can significantly improve the quality of life for patients and increase positive interactions. A recent meta-analysis examining different caregiver interventions found that the most beneficial interventions address caregiving competency initially, then gradually address the care needs of the patient. In the United States, about 25% of people with dementia receive care from nonfamily caregivers, such as home health aides or nursing assistants hired directly by the family to care for a patient in the home setting. These caregivers may not always receive the necessary training to provide dementia care and face a challenging work environment. For nonfamily caregivers, interventions focused on facility staff training programs have been associated with reductions in NPS among patients and with improved staff well-being. Some caregivers may benefit from encouragement to attend to their own emotional and personal needs. They should be
encouraged to maintain their own medical appointments, hobbies, and social network. Utilization of respite from caregiving as needed, which can provide temporary relief for caregivers through provision of substitute care, should also be encouraged. Support groups can provide caregivers with the opportunity to share concerns, personal feelings, and seek support from peers. Psychological interventions, such as counseling and supportive therapy, have also been associated with positive impacts on psychosocial outcomes between patients and their informal caregivers.
NONPHARMACOLOGIC MANAGEMENT
Nonpharmacologic interventions are first-line in the management of NPS in dementia patients to avoid the risks and side effects associated with medications. An exception is emergency situations in which the safety of the patient or others is compromised, for example due to severe agitation.
Multiple small studies report modest improvement in quality of life and NPS with reminiscence therapy, music therapy, bright light therapy, aromatherapy, pet therapy, physical therapy, occupational therapy, exercise training, speech therapy, and multisensory stimulation. However, most studies of nonpharmacologic interventions included patients with mild to moderate NPS, and few controlled data suggest that these interventions provide longer-term benefits outside of the treatment session. Psychotherapy can be useful, particularly for patients in the early stages of dementia who are demoralized, depressed, or anxious.
Nonpharmacologic interventions that can be implemented by caregivers for common NPS are summarized in Table 60-6. Unfortunately, some of these can be difficult to implement in real-world settings or may not provide sufficient control of disruptive NPS. There is preliminary evidence that family caregiver interventions, such as promoting helpful coping, effective communication and scheduling of pleasant activities, and tailored activities for persons with dementia and their caregivers may improve quality of life in people with dementia living in the community.
TABLE 60-6 ■ GENERAL BEHAVIORAL STRATEGIES FOR COMMON NEUROPSYCHIATRIC SYMPTOMS
PHARMACOLOGIC MANAGEMENT
Some patients who do not respond to nonpharmacologic approaches may require targeted medication therapy. It should be noted that while psychotropics are frequently prescribed for NPS in dementia, there are currently no pharmacotherapies with US Food and Drug Administration (FDA) approval for this purpose. Several classes of “psychiatric” medications have been studied specifically for NPS in dementia, but treatment response has been disappointing, with few randomized clinical
trials (RCTs) showing clear efficacy of antipsychotics, antidepressants, or anticonvulsants. Risks and benefits of each class of medication are discussed in detail below.
Due to the inherent risks of using medications to treat NPS in dementia, nonpharmacologic approaches should be first-line therapy. Psychotropics should be used only after other efforts have been made to mitigate NPS, with three exceptions: (1) clear-cut major depression; (2) psychosis causing harm or with significant potential of harm to self or others; and (3) aggression causing harm or risk of harm to self or others.
Clinicians should obtain informed consent for medication use from the patient or their legal representatives after discussing potential risks (including so-called “black box warnings” of mortality in the case of antipsychotics) and benefits. Medications should be slowly titrated using starting doses appropriate for older individuals. Although a slow titration schedule is recommended for older adults, medications should still be increased as tolerated to produce an improvement in target symptoms.
Behavior logs kept by the caregiver, documenting the timing and pattern of behaviors, may help identify optimal times for medication administration (eg, timing doses of medication to target difficult behaviors such as prior to bathing or dressing).
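As a purely illustrative sketch of how such a log can be used (the log format and field names here are hypothetical, not from any published instrument), a simple tally of episodes by hour of day can surface when behaviors cluster:

```python
from collections import Counter
from datetime import datetime

# Hypothetical behavior-log entries: (timestamp, behavior) pairs as a
# caregiver might record them. Format and contents are illustrative only.
log = [
    ("2024-03-01 08:15", "agitation during dressing"),
    ("2024-03-01 17:40", "pacing"),
    ("2024-03-02 08:05", "agitation during bathing"),
    ("2024-03-03 08:30", "agitation during dressing"),
]

def episodes_by_hour(entries):
    """Count logged episodes in each hour of the day."""
    hours = Counter()
    for stamp, _behavior in entries:
        hour = datetime.strptime(stamp, "%Y-%m-%d %H:%M").hour
        hours[hour] += 1
    return hours

counts = episodes_by_hour(log)
peak_hour, n = counts.most_common(1)[0]
print(f"Most episodes occur around {peak_hour}:00 ({n} logged)")  # hour 8
```

In this hypothetical log the episodes cluster around morning care, which would support timing an as-needed dose shortly before bathing or dressing rather than on a fixed schedule.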
Clinicians should pay close attention to side effects and educate the caregiver about what to look out for. It is important to keep in mind that older individuals are at higher risk of adverse effects from medications due to (1) decreased renal clearance and slowed hepatic metabolism, (2) medical comorbidities, (3) potential for drug–drug interactions due to being on multiple medications, (4) increased risk of orthostatic hypotension and falls due to decreased autonomic regulation, and (5) elevated risk of delirium. The American Geriatrics Society Beers Criteria for Potentially Inappropriate Medication Use in Older Adults provides an overview of medications and guidelines on when they should be avoided in older adults.
In prescribing psychotropic medications for NPS, clinicians should keep potential pitfalls in mind. Though there are no FDA-approved medications for NPS in dementia, clinicians routinely prescribe psychotropics despite safety and efficacy concerns. In a study of newly admitted nursing home residents, only 12% received nonpharmacologic interventions within the first 3 months of admission, while 71% received at least one psychotropic medication. Moreover, more than 15% were taking four or more
psychotropics. Among residents treated with psychotropics, 64% had received no psychopharmacologic treatment and 71% no psychiatric diagnosis in the 6 months preceding admission. Medications are widely used empirically based on similarities to other psychiatric conditions, such as depression, anxiety, or psychosis. Furthermore, psychotropics may be used without systematic identification of potential underlying causes of behaviors. NPS in dementia are the result of responses to stressors in individuals with increased vulnerability due to brain circuitry disruptions.
Nonpharmacologic strategies targeted at mitigating or eliminating these triggers should be used and evaluated for effect first. Most dementias are progressive in nature and NPS can fluctuate over time. Thus, clinicians and caregivers may find themselves trying to manage several evolving behaviors simultaneously, often with multiple medications, leading to increased risks of adverse effects from medications and unpredictable results. Further complicating this matter is that psychotropics can cause side effects that exacerbate other domains of NPS. For example, antipsychotic medications may contribute to sedation, reversal in sleep-wake cycle, and delirium. If medications are indicated, it is important to follow several guidelines as listed in Table 60-7.
TABLE 60-7 ■ GENERAL GUIDELINES FOR STARTING PSYCHOTROPIC MEDICATIONS FOR NEUROPSYCHIATRIC SYMPTOMS
Antipsychotics
Antipsychotics provide (at times considerable) benefit to some patients with psychosis and agitation in dementia but have been associated with higher risk of death, cardiovascular disease, and cerebrovascular disease in this population. The use of antipsychotics in patients with dementia remains controversial because their efficacy is modest and they have been associated with adverse effects, including weight gain, parkinsonism, rapid cognitive decline, a higher risk of cerebrovascular or cardiovascular events, QTc prolongation, and mortality. The DART-AD trial reported an increased long-term risk of mortality in patients with AD prescribed antipsychotics. As a result, the FDA issued a “black box warning” for the use of atypical and conventional antipsychotics in treating patients with dementia-related
psychosis. Retrospective cohort studies evaluating all-cause mortality in older individuals have found higher mortality risk with haloperidol than with risperidone, olanzapine, aripiprazole, ziprasidone, or quetiapine. The highest risk of mortality occurs soon after therapy is initiated, and there appears to be a dose-dependent relationship to mortality. Given these risks, the American Psychiatric Association (APA) has published practice guidelines on the use of antipsychotics to treat agitation or psychosis in patients with dementia, emphasizing judicious use and reserving antipsychotics for when nonpharmacologic approaches have been tried, and when symptoms are severe and dangerous. They also recommend that if there is no significant response after a 4-week period, the medication should be tapered and withdrawn. In patients whose antipsychotic medications are being tapered, symptoms should be assessed at least every month during tapering and for at least 4 months after the medication is discontinued. Long-acting injectable antipsychotics should not be used unless administered for a co-occurring chronic psychotic disorder.
If a risk/benefit assessment favors the use of an antipsychotic for NPS in patients with dementia, treatment should be initiated at a low dose and titrated to the minimum effective dose as tolerated. The choice of antipsychotic should be guided by the target symptoms, side-effect profile, and formulation (eg, consider using medications that have solution or dissolving forms). The recommended dosing in patients with dementia and side effects of atypical antipsychotics discussed in this section are summarized in Table 60-8.
TABLE 60-8 ■ SELECT ANTIPSYCHOTIC DOSAGES AND SIDE EFFECTS
The Clinical Antipsychotic Trials of Intervention Effectiveness-Alzheimer’s Disease (CATIE-AD) study was designed to compare the efficacy of antipsychotics to placebo in reducing psychotic symptoms or behaviors of agitation/aggression in outpatients with AD. Olanzapine and risperidone showed the most benefit in NPS reduction. However, the magnitude of improvement was modest, use of atypical antipsychotics over 36 weeks was associated with worsening cognitive function at a magnitude consistent with 1 year’s cognitive deterioration on placebo, and there were no observable improvements in functional measures. Similarly, an Agency for Healthcare Research and Quality (AHRQ) Comparative Effectiveness Review found that the most effective antipsychotics include risperidone (for psychosis and agitation), olanzapine (for agitation), and aripiprazole (for overall NPS). A meta-analysis of studies examining antipsychotic discontinuation in patients with dementia found that although a higher proportion of patients who discontinued antipsychotics had worsening NPS severity than of those who continued them, no statistically significant difference in NPS severity was observed.
Due to the elevated risk of extrapyramidal symptoms (EPS), typical antipsychotics, such as haloperidol, should not be used as first-line neuroleptics in nonemergent situations. EPS include parkinsonism, tardive dyskinesia, akathisia, and dystonias. Particular care should be taken when considering antipsychotics in patients with DLB or Parkinson disease, as these patients are extremely sensitive to extrapyramidal side effects, including parkinsonian symptoms, acute dystonia, and neuroleptic malignant syndrome; clozapine is an exception. Due to its low risk of causing EPS, clozapine is a good option for psychotic symptoms in DLB. However, the potential for serious adverse effects, particularly neutropenia, and the need for close laboratory monitoring make it more challenging to use in routine clinical practice. Pimavanserin was approved by the FDA in 2016 for treatment of hallucinations and delusions associated with Parkinson disease psychosis. Clozapine and quetiapine, though not specifically approved for use in Parkinson disease, are less likely than other antipsychotics to worsen parkinsonian symptoms.
Antidepressants
Aside from depression, antidepressants are also used to treat other NPS, including anxiety, agitation, and apathy. Up to 50% of community-dwelling older adults with dementia in the United States are prescribed antidepressants. Although systematic reviews of the literature have found limited evidence to support the efficacy of antidepressants in the treatment of depression, anxiety, and apathy in dementia, there is emerging evidence for their use in agitation. A summary of dosing and side effects of antidepressants is presented in Table 60-9.
TABLE 60-9 ■ ANTIDEPRESSANT ORAL DOSAGES AND SIDE EFFECTS
Selective serotonin reuptake inhibitors Selective serotonin reuptake inhibitors (SSRIs) act on the serotonergic system by blocking presynaptic reuptake of serotonin. They are considered first-line therapy for late-life depression and are generally well tolerated compared to other classes of antidepressants such as TCAs. In the context of dementia, however, systematic reviews and meta-analyses have found little or no difference in depressive symptoms between groups treated with SSRIs and placebo. Fluoxetine has not been associated with improvements in NPS. Though initial findings for sertraline reported promising effects for depression, subsequent larger studies have not found evidence that sertraline is superior to placebo in the treatment of depression in dementia. There is growing evidence, though, for a role of SSRIs in the management of agitation in dementia. The Citalopram for Agitation in
Alzheimer’s Disease (CitAD) study, a randomized placebo-controlled trial, found that participants in the citalopram group had significant improvement in agitation, measures of caregiver distress, and performance of ADLs, as well as reduced use of rescue medication (lorazepam) for agitation, compared to placebo. However, citalopram was associated with side effects including QT prolongation and worsened cognition, which may limit its use. Further analysis found that cognitive and cardiac changes were primarily associated with the R-enantiomer, whereas clinical improvements were primarily associated with the S-enantiomer, escitalopram. The effectiveness of escitalopram for agitation in AD is now being studied in the Escitalopram for Agitation in Alzheimer’s Disease (S-CitAD) trial.
Though SSRIs are generally well tolerated, common side effects include nausea, diarrhea, anorexia, drowsiness, lethargy, sleep disturbance, tremor, and anxiety. These side effects usually improve after 1 to 2 weeks.
Hyponatremia may occur with SSRI treatment and should be assessed particularly in older adults. Some SSRIs, such as paroxetine, have anticholinergic properties and should be avoided in older adults.
Serotonin norepinephrine reuptake inhibitors Serotonin norepinephrine reuptake inhibitors (SNRIs) include duloxetine, venlafaxine, and desvenlafaxine. While they are used in late-life depression, evidence for their efficacy in NPS in dementia is limited. A small 6-week randomized placebo-controlled trial of venlafaxine (14 in the venlafaxine group, 17 placebo) did not find a significant difference in depressive symptoms between groups. A longer 12-week randomized, double-blind trial of 20 patients with moderate AD did not find any benefit of venlafaxine for symptoms of depression compared to baseline, though statistically significant changes in cognitive and functional scales were observed. Common side effects of SNRIs include gastrointestinal (GI) distress, headaches, sexual dysfunction, and hyponatremia.
Mirtazapine Mirtazapine is a noradrenergic and specific serotonergic antidepressant. In the Health Technology Assessment Study of the Use of Antidepressants for Depression in Dementia (HTA-SADD) trial, there was no difference between mirtazapine and placebo, or between mirtazapine and sertraline, on depressive symptoms. Subsequent subgroup analysis found that mirtazapine reduced depressive symptoms over placebo in participants with primarily affective symptoms, severe endorsement of psychological symptoms, and an absence of sleep problems. A small, open-label study with
16 patients found a significant reduction in agitation. Mirtazapine can be sedating at lower doses and is sometimes used as a sleep aid. However, a more recent randomized placebo-controlled trial of mirtazapine in patients with dementia and sleep disturbances (24 in the mirtazapine group vs 16 placebo) found no benefit over placebo in improving sleep duration or efficiency. The group receiving mirtazapine experienced increased daytime sleepiness, limiting its use.
Bupropion Bupropion is a dual inhibitor of norepinephrine and dopamine reuptake. Though it has been shown to be effective in depressed older adults, its use specifically for NPS has not been extensively studied. One RCT reported it was ineffective for the treatment of apathy in Huntington disease. Bupropion should be avoided in individuals with a history of seizures or psychotic symptoms.
Tricyclic antidepressants A number of older and smaller studies have investigated the use of TCAs in dementia. Two RCTs exploring the effects of imipramine on depression in dementia reported no benefit over placebo. An RCT of clomipramine reported that participants receiving treatment had significantly improved depressive symptoms on the Hamilton Depression Scale and a higher rate of remission. A small RCT of desipramine in individuals with moderate AD found improvement in measures of function in the treatment group, but no differences in symptoms of depression. The use of TCAs is limited by their high risk of adverse events relative to the previously discussed classes of antidepressants.
Monoamine oxidase inhibitors The monoamine oxidase inhibitor (MAOI) moclobemide was found to be superior to placebo for depressive symptoms in one multisite, double-blinded, placebo-controlled trial of 649 older adults with symptoms of depression and cognitive decline. However, this antidepressant is not marketed in the United States. The use of MAOIs is limited by the required adherence to a low-tyramine diet and the risk of serotonin syndrome with concomitant use of other serotonergic antidepressants.
Anticonvulsants and “mood stabilizers” Many anticonvulsants are approved for use as so-called mood stabilizers in bipolar disorder. However, there is limited evidence for their efficacy in treating NPS in dementia, with more evidence that their use can be harmful. Though initial case studies of valproic acid suggested possible efficacy for treatment of agitation in dementia, recent
trials and a recent Cochrane meta-analysis suggested that valproate, when used solely for “organic brain disorders,” is ineffective for treating agitation in people with dementia. Furthermore, valproic acid is poorly tolerated with numerous adverse effects such as sedation, diarrhea, ataxia, and thrombocytopenia.
Carbamazepine has been shown to improve NPS in one small (n = 51) randomized trial in patients who were resistant to treatment with antipsychotics. Other trials have reported no benefit over placebo but increased adverse effects, including sedation, disorientation, confusion, and ataxia. Further limiting its use in older adults, carbamazepine is a potent hepatic enzyme inducer with high potential for drug–drug interactions, is an auto-inducer (making its titration challenging), and is also associated with bone marrow toxicity and hyponatremia.
Studies examining the use of oxcarbazepine and topiramate for behavioral disturbances in dementia are limited. There has only been one randomized controlled study of oxcarbazepine in this context, where no difference was observed compared to placebo for aggression or agitation, while adverse events such as sedation, fainting, and ataxia occurred more frequently in the treatment group. No placebo-controlled trials involving topiramate for NPS in dementia have been conducted. One randomized study found it to have superior efficacy compared to risperidone.
The efficacy of gabapentin in treating NPS has been described in several case reports and open-label trials, including for treating behaviors of sexual inappropriateness. However, despite its frequent off-label use, there are no current randomized controlled trials evaluating the use of gabapentin.
Case reports, retrospective chart reviews, and one open-label trial (which allowed for concomitant use of other psychotropic drugs) have reported modest clinical improvement in NPS using lamotrigine. No randomized-controlled trials have been conducted to date. The need for a slow titration schedule due to the risk of Stevens-Johnson syndrome may limit its use in the acute setting.
Lithium The use of lithium, an established treatment for bipolar and other mood disorders with symptoms of agitation, has been limited in dementia by its narrow therapeutic window, leading to adverse effects including toxicity, increased falls, and confusion in prior case reports and trials. There are currently no published randomized controlled trials evaluating the use of lithium.
However, the Lithium Treatment for Agitation in Alzheimer’s disease (Lit-AD) clinical trial, which uses low-dose lithium, has recently completed recruitment. Though study results are not yet published, this study will serve as the first randomized, double-blind, placebo-controlled trial to assess the efficacy of lithium for symptoms of agitation and aggression, with or without psychosis, in older adults with AD.
Cholinesterase inhibitors and memantine Cholinesterase inhibitors (AChI), such as donepezil, galantamine, and rivastigmine, and memantine, a noncompetitive N-methyl-D-aspartate (NMDA) receptor antagonist, are symptomatic therapies for cognitive symptoms in Alzheimer dementia. Cholinesterase inhibitors are approved for treatment of mild to moderate AD, while memantine is approved for treatment of moderate to severe AD. Evidence for their efficacy in treating NPS is limited. Meta-analyses of randomized controlled trials have found that donepezil, galantamine, and memantine are superior to placebo in reducing the emergence of NPS, though the effects are modest compared to those of neuroleptics. Combination therapy with an AChI and memantine, which is sometimes used in moderate to severe AD, has been shown in meta-analysis to have superior outcomes in reducing NPS compared to monotherapy and placebo. In general, AChI and memantine should not be considered first-line pharmacologic agents in the management of acute NPS of moderate or greater severity. Given their potential benefit in delaying cognitive symptom progression and modest improvement of NPS, however, they remain reasonable options for treating chronic NPS in dementia.
Both AChI and memantine are generally well tolerated. Common side effects of AChIs include vomiting, diarrhea, dyspepsia, anorexia, weight loss, headache, dizziness, insomnia, and vagotonic effects leading to bradycardia and heart block. Common side effects of memantine include dizziness, headache, confusion, constipation, and fatigue.
Stimulants Modafinil, a wakefulness-promoting medication, and methylphenidate, a stimulant, have both been evaluated as possible treatments for apathy associated with AD. Modafinil is FDA-approved to treat narcolepsy, shift work sleep disorder, and obstructive sleep apnea. One small randomized controlled trial in people with mild to moderate AD and clinically significant apathy at baseline did not observe statistically significant benefit over placebo for improving apathy or caregiver burden.
Common side effects include headaches, nausea, diarrhea, anxiety, dyspepsia, and insomnia.
Methylphenidate is a dopamine reuptake inhibitor which is FDA- approved for treatment of attention deficit/hyperactivity disorder.
Methylphenidate has been associated with significant reductions in apathy symptoms in AD in 6-week and 12-week randomized, double-blind, placebo-controlled trials. Though generally well tolerated, side effects of methylphenidate include cardiovascular effects, insomnia, headaches, and decreased appetite. Psychiatric side effects may include increased impulsivity, hallucinations, and affect lability. As a result, methylphenidate and other stimulants should be avoided in individuals with a history of schizophrenia, bipolar disease, or impulse control disorders.
Benzodiazepines Benzodiazepines are used in 8.5% to 20% of patients with AD, despite limited evidence for their efficacy in reducing agitation or improving sleep quality. A systematic review found few randomized controlled trials comparing benzodiazepines to other medications in managing NPS in dementia, and of the few that are available, none found evidence that benzodiazepines are more effective in reducing NPS than antipsychotics. They should be avoided due to their cognitive and deliriogenic effects, which increase the likelihood of falls and fractures. The exception is use in emergency situations in which severely agitated patients are at risk of harming themselves or others, or in situations where patients have not responded to alternative pharmacologic interventions. If used, benzodiazepines should be utilized sparingly, with short-acting preparations preferred, as they do not accumulate with repeated dosing the way long-acting agents do. Many benzodiazepines are metabolized through the liver; those metabolized through glucuronidation (eg, oxazepam, temazepam, or lorazepam) are preferable in individuals with complex medical comorbidities, as they have no active metabolites and are less susceptible to drug–drug interactions.
Melatonin Circadian rhythm disturbances such as sleep disturbances and sundowning are common in dementia. Due to its role in maintaining the circadian rhythm, melatonin has been of interest as a pharmacologic therapy for NPS in dementia. A 2020 Cochrane review of pharmacotherapies for sleep disturbance in dementia identified five randomized controlled trials examining the use of melatonin for sleep disturbances in dementia and found low-certainty evidence that melatonin doses up to 10 mg may have little or no effect on sleep efficiency, time awake after sleep onset, number of nighttime awakenings, or mean duration of sleep. Of studies that
have examined melatonin versus placebo in the context of agitation or sundowning behaviors, one RCT of 20 patients treated for 4 weeks with melatonin 3 mg reported benefits over placebo. One randomized controlled trial of ramelteon, a melatonin-receptor agonist, did not find any evidence of effect on sleep outcomes.
Newer medications Pimavanserin, an atypical antipsychotic with a novel mechanism of action as a selective inverse agonist at the serotonin 5-HT2A receptor, was approved in 2016 by the US FDA for treatment of Parkinson disease psychosis. Although it has a low risk of EPS due to its lack of dopamine blockade, it carries a side effect profile similar to other atypical antipsychotics, including QT prolongation and the black box warning for increased mortality in patients with dementia. There has been one phase 2, single-center study of nursing home residents using pimavanserin in the treatment of behavioral disturbances in AD. Participants treated with pimavanserin had reduced NPI scores at 6 weeks, but improvements were not sustained at 12 weeks compared to placebo.
Dextromethorphan-quinidine was approved by the US FDA in 2010 for the treatment of pseudobulbar affect. Dextromethorphan is a low-affinity, uncompetitive N-methyl-D-aspartate receptor antagonist, σ1 receptor agonist,
serotonin and norepinephrine reuptake inhibitor, and neuronal nicotinic α3β4 receptor antagonist. Off-label use in AD for agitation has been examined in one phase 2 randomized, multicenter, double-blind, placebo-controlled study,
which reported a significant reduction in symptoms, though significant
adverse events were observed in the treatment group, including falls, diarrhea, and urinary tract infection.
Evidence for the effectiveness and safety of psychoactive cannabinoids, such as dronabinol and tetrahydrocannabinol (THC), has been varied. While some studies have reported significant improvements of NPS, two recent systematic reviews found a high risk of bias in studies with considerable variability with respect to study design, and that higher quality trials did not find evidence of improvement in NPS. Most adverse drug events reported were mild, and the most common adverse drug event was sedation. Further large, randomized controlled trials are needed.
CONCLUSION
NPS are common throughout the range of dementia severity. They can be disabling to the patient, onerous to the caregiver, and at times dangerous. The differential diagnosis of NPS in dementia is wide, including underlying medical causes, delirium, medication effects, and primary psychiatric disorders. These possibilities should be considered prior to implementing a treatment plan. A multidisciplinary dementia care team can be invaluable in delivering effective, personalized care to patients with dementia.
Nonpharmacologic interventions should be considered first-line therapy, prior to implementing medications. Pharmacotherapy is appropriate when nonpharmacologic approaches have been unsuccessful, when symptoms are distressing or disruptive to the patient or caregiver, and in emergent situations. All psychotropic medications for behavioral symptoms are associated with adverse effects. Individuals with dementia and multiple medical comorbidities requiring many medications are at elevated risk of adverse effects and drug–drug interactions. When starting new medications, it is best to “start low and go slow,” use the lowest possible dose, and frequently reassess the risk/benefit ratio of the medication.
FURTHER READING
2019 American Geriatrics Society Beers Criteria® Update Expert Panel, Fick DM, Semla TP, et al. American Geriatrics Society 2019 updated AGS Beers Criteria® for potentially inappropriate medication use in older adults. J Am Geriatr Soc. 2019;67(4):674–694.
Black BS, Johnston D, Rabins PV, et al. Unmet needs of community-residing persons with dementia and their informal caregivers: findings from the Maximizing Independence at Home study. J Am Geriatr Soc. 2013;61(12):2087–2095.
Canevelli M, Adali N, Voisin T, et al. Behavioral and psychological subsyndromes in Alzheimer’s disease using the Neuropsychiatric Inventory. Int J Geriatr Psychiatry. 2013;28(8):795–803.
Cooper C, Mukadam N, Katona C, et al. Systematic review of the effectiveness of non-pharmacological interventions to improve quality of life of people with dementia. Int Psychogeriatr. 2012;24(6):856–870.
Gitlin LN, Kales HC, Lyketsos CG. Nonpharmacologic management of behavioral symptoms in dementia. JAMA. 2012;308(19):2020–2029.
Hane FT, Lee BY, Leonenko Z. Recent progress in Alzheimer’s disease research, part 1: pathology. J Alzheimers Dis. 2017;57(1):1–28.
Hane FT, Robinson M, Lee BY, et al. Recent progress in Alzheimer’s disease research, part 3: diagnosis and treatment. J Alzheimers Dis. 2017;57(3):645–665.
Kales HC, Gitlin LN, Lyketsos CG; Detroit Expert Panel on the Assessment and Management of the Neuropsychiatric Symptoms of Dementia. Management of neuropsychiatric symptoms of dementia in clinical settings: recommendations from a multidisciplinary expert panel. J Am Geriatr Soc. 2014;62(4):762–769.
Lanctôt KL, Amatniek J, Ancoli-Israel S, et al. Neuropsychiatric signs and symptoms of Alzheimer’s disease: new treatment paradigms. Alzheimers Dement (N Y). 2017;3(3):440–449.
Mace NL, Rabins PV. The 36-Hour Day: A Family Guide to Caring for People Who Have Alzheimer Disease, Other Dementias, and Memory Loss. Baltimore, MD: JHU Press; 2017.
Porsteinsson AP, Drye LT, Pollock BG, et al. Effect of citalopram on agitation in Alzheimer disease: the CitAD randomized clinical trial. JAMA. 2014;311(7):682–691.
Robinson M, Lee BY, Hane FT. Recent progress in Alzheimer’s disease research, part 2: genetics and epidemiology. J Alzheimers Dis. 2017;57(2):317–330.
Schneider LS, Dagerman KS, Insel P. Risk of death with atypical antipsychotic drug treatment for dementia: meta-analysis of randomized placebo-controlled trials. JAMA. 2005;294:1934–1943.
Seitz DP, Gill SS, Herrmann N, et al. Pharmacological treatments for neuropsychiatric symptoms of dementia in long-term care: a systematic review. Int Psychogeriatr. 2013;25(2):185–203.
Sink KM, Holden KF, Yaffe K. Pharmacological treatment of neuropsychiatric symptoms of dementia: a review of the evidence. JAMA. 2005;293:596–608.
Chapter 61
Parkinson Disease and Related Disorders
Vikas Kotagal, Nicolaas I. Bohnen
DEFINITION AND TERMINOLOGY
Parkinsonism is the unifying term that describes a constellation of motor and nonmotor neurologic features. Parkinsonism can be defined as a variable combination of six specific, independent motor features: bradykinesia (slowness of movement), tremor at rest, rigidity, loss of postural reflexes, flexed posture, and freezing of gait (where the feet are transiently “glued” to the ground). Of these features, bradykinesia—affecting either the arms or legs (“appendicular bradykinesia”) or midline structures including the trunk, head and neck, oropharynx, or eyes (“axial bradykinesia”)—is the most central element of parkinsonism. It is caused by loss of dopaminergic neurons in the substantia nigra pars compacta (SNpc), a midbrain structure responsible for innervating a group of critical motor nuclei within the deep portions of the brain collectively labeled the basal ganglia.
There are multiple causes of parkinsonism. The most common and extensively studied is idiopathic Parkinson disease (PD), which is estimated to affect approximately 1% to 2% of people older than age 60. PD is a complex disorder with a wide variety of clinical presentations whose exact pathogenesis is incompletely understood. The eponymous name “Parkinson disease” was coined following the publication of “An Essay on the Shaking Palsy” by the British surgeon James Parkinson in 1817. In more recent years, the term “Parkinson disease” has been favored over “Parkinson’s disease” given that Dr. Parkinson neither personally contracted nor “owned” the disease that over time has been associated with his surname.
There are numerous causes of parkinsonism, almost all of which become increasingly common with advancing age. These include (a)
secondary parkinsonism caused by toxins, medications, or structural lesions in the brain; (b) atypical parkinsonian conditions including progressive supranuclear palsy (PSP), multiple system atrophy (MSA), corticobasal syndrome (CBS), and dementia with Lewy bodies (DLB); and (c) more rare neurodegenerative conditions with heterogeneous manifestations that can include parkinsonism such as juvenile Huntington disease, spinocerebellar ataxia type 3, and Wilson disease (Table 61-1).
TABLE 61-1 ■ CLASSIFICATION OF THE PARKINSONIAN STATES
Learning Objectives
Learn the epidemiology, pathobiology, clinical manifestations, and genetics of Parkinson disease (PD).
Understand the latest terminology and major clinical differences between parkinsonism and PD.
Learn the common presenting features of diseases, such as progressive supranuclear palsy
(PSP), corticobasal degeneration (CBD), and MSA that mimic and require differentiation from
PD.
Acquire new knowledge about the latest tests to diagnose PD and indications and adverse effects of cutting-edge therapies, including dopaminergic and nondopaminergic agents and surgical treatments for PD.
Understand the significance of exercise, physical activity, and supportive care for management of patients with PD.
Key Clinical Points
Parkinsonism includes a constellation of motor and nonmotor features. Parkinson disease (PD) is the most common neurodegenerative cause of parkinsonism. Other causes include medications, structural lesions of the brain, and diseases that present with extrapyramidal manifestations.
The gold standard for making a diagnosis of PD remains an autopsy that shows Lewy bodies in the substantia nigra.
PD affects more than 1 out of every 100 individuals older than age 60 and is more common in men.
Patients with PD almost always respond to dopaminergic medications, while those with other causes of parkinsonism generally do not. The symptoms most responsive to treatment include bradykinesia and rigidity.
In addition to clinical assessment, dopamine transporter single-photon emission computed tomography (SPECT) imaging helps to diagnose PD and differentiate it from essential tremor.
Deep brain stimulation (DBS) surgery is typically indicated for patients with difficult motor complications and medication-refractory tremors.
It should be noted that there are a variety of overlapping and often outdated names frequently used to describe certain parkinsonian conditions
that can be confusing to the nonspecialist. For example, the term olivopontocerebellar atrophy was formerly used to describe a collection of neurodegenerative conditions including MSA and some other progressive neurodegenerative cerebellar disorders. Similarly, the terms Lewy body dementia (LBD) and DLB are used interchangeably to refer to the same disorder. Finally, the term “diffuse Lewy body disease” is also used loosely by both clinicians and pathologists to describe the topographical distribution of postmortem findings that can be seen in DLB.
EPIDEMIOLOGY
Like other insidiously developing progressive disorders of aging, there are inherent challenges in identifying the true prevalence of PD. The diagnosis of PD is typically made on the basis of clinical examination. There are numerous adjunctive clinical diagnostic measures, including a documented favorable response to a trial of dopaminergic medications, that can enhance certainty in the diagnosis of PD, but these may not be used in large epidemiologic studies. The gold standard for making the diagnosis of PD remains autopsy, where characteristic intracellular cytoplasmic inclusions of α-synuclein called Lewy bodies are seen in the SNpc and other brain and nervous system regions. Longitudinal clinical-postmortem studies suggest that 10% to 20% of patients thought to have PD in life will have alternative diagnoses on autopsy. Interestingly, midbrain Lewy bodies are also seen on autopsy in about 20% of older individuals without a known history of parkinsonism, suggesting that PD may be underdiagnosed or may not yet have manifested its typical motor features during life.
Alternatively, these individuals may have so-called prodromal DLB.
Many epidemiologic studies of PD identify cases through medical records rather than through door-to-door examinations of well-defined populations. Incidence rates of PD vary not only by age but also by gender. Estimates across all ages and genders tend to range from 4.5 to 19 per 100,000 person-years, reflecting differences in ascertainment methods and biological susceptibility. Among individuals older than age 60, incidence rates range from 27.2 to 107.2 cases per 100,000 person-years. The median age of onset is in the early sixties, and the risk for PD increases with each successive decade of life.
Since most people with PD live many years before death, prevalence rates of PD are higher than the incidence rates. Across all age ranges and
genders, PD is thought to affect between 100 and 200 out of every 100,000 people. For people older than age 60, PD is thought to affect slightly more than 1 out of every 100 individuals. Parkinson disease is about 1.5 times more common in men than in women for reasons that may reflect differences in underlying biological susceptibility.
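These incidence and prevalence figures are mutually consistent under the standard steady-state epidemiologic approximation, prevalence ≈ incidence × mean disease duration. The brief sketch below illustrates the arithmetic; the disease-duration values of roughly 10 to 15 years are assumptions chosen for illustration, not figures from this chapter.

```python
# Steady-state approximation: prevalence ~ incidence x mean disease duration.
# The mean disease durations used below are illustrative assumptions only.

def approx_prevalence_per_100k(incidence_per_100k_py: float,
                               mean_duration_years: float) -> float:
    """Approximate point prevalence per 100,000 people."""
    return incidence_per_100k_py * mean_duration_years

# Incidence among adults older than age 60 (chapter range): 27.2-107.2/100,000 py.
low = approx_prevalence_per_100k(27.2, 10)    # lower incidence, ~10-y duration
high = approx_prevalence_per_100k(107.2, 15)  # higher incidence, ~15-y duration
print(low, high)  # roughly 272 to 1608 per 100,000 (~0.3% to ~1.6%)
```

The upper end of this rough range accords with the chapter's estimate that slightly more than 1 in 100 individuals older than age 60 is affected.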
In most cases, PD is not a direct cause of death and frequently goes unmentioned on death certificates. Death in individuals with PD is frequently due to secondary acute comorbidities seen in patients with mobility restrictions, including aspiration pneumonia, traumatic falls, deep vein thrombosis, and pulmonary embolus. Mortality rates in PD are slightly higher in comparison to age- and gender-matched populations, although differences in life expectancy are most profound in individuals with early-onset PD. Individuals with PD and dementia also have a higher risk of death. PD patients who are followed by a neurologist have a lower likelihood of death compared to those whose PD is managed exclusively by primary care physicians.
PD is caused by a complex interaction between genetic and environmental risk factors. Nevertheless, to date, relatively few environmental risk factors for sporadic PD have been identified. Exposure to the pyridine compound 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) and structurally related pesticides, industrial solvents, and heavy metals including manganese have all been suggested to be associated with a higher risk of incident PD. Higher incidence in rural areas has been associated with farming-related exposure to pesticides and/or drinking well water. Several factors have been consistently demonstrated to have an inverse relationship with PD risk, including cigarette smoking, caffeinated coffee consumption, high plasma levels of uric acid, and endogenous estrogen exposure. The biological principles that mediate these associations are incompletely understood.
Although over 90% of cases of PD are considered sporadic, there is a growing understanding of the influence of primary monogenic causes of PD. Mendelian inheritance is implicated in several PD risk factor genes, all of which share a common naming schema as PARK genetic loci. To date, there are 20 PARK genes (Table 61-2). The most notable of these is PARK1, encoding α-synuclein, mutations in which were the first genetic variants identified to cause familial parkinsonism. There are now several identified autosomal-dominant and autosomal-recessive causes of PD, some of which
show variable penetrance depending on the underlying mutation. More recently, common haplotypes of genes known to play a role in other neurodegenerative conditions have been recognized as incremental risk factors for PD. These include the MAPT gene encoding the tau protein implicated in the neurofibrillary tangles of Alzheimer disease (AD). Allelic variations, including point mutations and deletions, in the GBA gene, whose loss of function causes impaired lysosomal activity in Gaucher disease, have also been linked to cases of early-onset PD, particularly among individuals of Ashkenazi Jewish ancestry.
TABLE 61-2 ■ GENETIC FORMS OF PRIMARY PARKINSONISM
Understanding the pathobiology of these genetic PD subtypes has advanced the field’s understanding of causative factors leading to the majority of “idiopathic” PD as well. PD is now thought to occur because of the confluence of several interrelated neurobiological risk states that each
grow more problematic in the setting of aging. These include (1) autophagy-lysosomal dysfunction (ALD), (2) oxidative stress and dopamine toxicity, (3) selective neuronal vulnerability, and (4) network frailty and the prion-like spread of toxic alpha-synuclein pathology. First, ALD is implicated in PD with GBA1 and LRRK2 mutations and may play a bidirectional causative role in propagating toxic alpha-synuclein aggregation, which in turn may deleteriously affect autophagy-lysosomal function. Second, the generation of toxic oxidative species by neuronal dopamine metabolism may impair mitochondrial function, which may in turn lead to secondary ALD related to the turnover of dysfunctional mitochondria. Third, ALD and oxidative stressors may make certain projection neurons with unusually large axonal arborizations particularly vulnerable to the effects of inefficient cellular metabolism. Certain neuronal groups affected by alpha-synuclein pathology show extensively branched axonal trees, perhaps making them particularly vulnerable. In some cases, this vulnerability can be accelerated by local neuroinflammation at the site of nascent neurodegeneration, driven by adjacent glial cell populations. Finally, toxic alpha-synuclein appears to spread from one cell to the next in “prion-like” fashion, whereby a misfolded peptide aggregate can seed a connected neuron in its functional network, causing it to develop alpha-synuclein pathology as well. These findings have collectively set the stage for new disease-modifying approaches currently being tested in PD clinical trials.
Atypical parkinsonian conditions including PSP, MSA, and CBS manifest with a prevalence rate that is roughly 5% to 10% of PD prevalence. DLB is more common than the other atypical parkinsonisms, although prevalence estimates of DLB are confounded by its significant clinical and pathologic overlap with other common neurodegenerative conditions including PD and AD. Risks for developing PSP and CBS have been linked to allelic variation in the MAPT gene, and mutations in the GBA gene are seen with increased frequency in both PD and DLB.
PATHOPHYSIOLOGY
PD is most strongly associated with two specific postmortem hallmarks: (a) the development of cytoplasmic, eosinophilic inclusions of the misfolded synaptic protein α-synuclein called Lewy bodies and (b) the loss of SNpc dopaminergic neurons innervating the striatum.
Nigrostriatal dopaminergic denervation is seen as part of the spectrum of normal aging, albeit to a milder degree than is seen in PD. It is estimated that with every year of life after the third decade, we lose 0.5% to 1% of nigrostriatal nerve terminals, including in the dorsal putamen, a structure charged with mediating the speed and precision of motor movements. By the time motor symptoms of PD are present, however, the posterior putamen has undergone profound denervation equating to the loss of 60% to 80% of dopaminergic terminals. In this way, PD could be conceptualized as an accelerated form of nigrostriatal aging, although the nerve terminal loss in PD has a particular predilection for the posterior and dorsal putamen compared to more anterior and ventral striatal regions. In contrast, normal aging is associated with milder and more diffuse striatal losses of nerve terminals.
Within the striatum, dopaminergic innervation is organized into two broad conceptual pathways: the direct pathway and the indirect pathway. Although this model is continually updated, it has formed the scientific basis accounting for a number of advances in neurobiology including the development of DBS. In this conceptual model, the striatum is viewed as a series of interconnected relay nuclei that upregulate or downregulate inputs from both the cortex and deep nuclei of the brain stem and spinal cord, yielding a well-regulated motor output to the cortex, thalamus, and brainstem pedunculopontine nucleus, which is part of the mesencephalic locomotor center.
The “direct pathway” is a monosynaptic, D1 receptor–mediated connection between the striatum and the globus pallidus pars interna (GPi) and substantia nigra pars reticulata (SNpr). Activation of this pathway inhibits these latter two nuclei, leading to an increase in motor output via thalamocortical afferents. The “indirect pathway” is a polysynaptic pathway that depends on inhibitory D2 receptors affecting the globus pallidus pars externa (GPe) and the subthalamic nucleus (STN), which then innervate the GPi/SNpr. The net effect of the indirect pathway is to suppress motor output via thalamocortical afferents. Like many models of complex biological phenomena, the direct and indirect pathway model is an oversimplification of a more nuanced network and has been challenged and revised over time. It nevertheless provides a useful conceptual framework to think about the pathologic changes in PD. Since D1 receptors are excitatory and D2 receptors are inhibitory, loss of dopamine in PD has the net effect of reducing transmission through the direct pathway (less stimulation of voluntary motor activity) and increasing
transmission via the indirect pathway (more inhibition of motor activity) leading to an output that is characterized by a paucity of movement.
Significant advances in neurobiology over the last 20 years have improved our understanding of cellular pathology in PD. Abnormal processing of misfolded α-synuclein is now thought to occupy a central role in PD pathogenesis. α-Synuclein itself is an endogenously produced neuronal protein involved in synaptic vesicle trafficking. The breadth of mendelian genetic mutations linked to inherited forms of PD has given rise to several theories about the pathogenesis of sporadic PD, many of which coexist and contribute in additive fashion toward cell death in at-risk neuronal populations including the SNpc. These possibilities include (1) damage to the protein degradation properties of lysosomes leading to α-synuclein accumulation and aggregation; (2) effects from oxidative stress, such as the reaction of oxyradicals with nitric oxide; (3) impaired mitochondrial function leading to both reduced ATP production and accumulation of electrons that aggravate oxidative stress, with the final outcome being apoptosis and cell death; and (4) inflammatory changes in the nigra producing cytokines that augment apoptosis.
A description of contemporary models of PD pathogenesis would be incomplete without discussing the developing hypothesis that pathogenesis and disease progression of PD may be mediated through a prion-like cell-to-cell spread of misfolded α-synuclein. Prion proteins have an unusual morphology that allows them to induce pathologic changes in adjacent cells by promoting misfolding of endogenous proteins, thereby transmitting cell death to adjacent neurons in an infectious-like fashion. In vitro and in vivo experiments in preclinical models of PD have shown the ability of misfolded α-synuclein to induce similar changes in adjacent neurons leading to neurodegeneration.
The “Braak model” of PD pathogenesis is a temporal and topographic schema of Lewy body or Lewy neurite deposition based on findings in a cohort of older individuals with Lewy bodies found on autopsy. In the originally proposed model, Lewy body formation begins in the medulla oblongata and then progresses in a rostral fashion to involve the upper brain stem followed by the diencephalon and cortex. Coincident with the medullary deposition, this model also posits early deposition in the olfactory tubercle, which may then progress to adjacent regions. Lewy body deposition in the original Braak stage 3 (of six total stages) involves the
nigra and is thought to correspond with the onset of motor features of PD. Braak stages 1 and 2 correspond with premotor features of PD, including olfactory, sleep, and autonomic symptoms that can often predate the diagnosis of PD by several years. Similarly, cortical Lewy body formation seen in Braak stages 5 and 6 corresponds with the cognitive impairment and dementia seen later in the disease course of PD. More recent revisions of this model suggest even earlier Lewy body deposition in peripheral autonomic nerve terminals, including those of the intestines, stomach, and myocardium, antedating the motor symptoms of PD by decades.
Unlike PD, PSP and CBS are not thought to be disorders of α-synuclein but instead are attributable to misfolded tau. The parkinsonism seen in these disorders is attributable to tau-based neurodegeneration of the SNpc and basal ganglia. MSA is characterized by the development of argyrophilic cytoplasmic inclusion bodies that are positive for α-synuclein in glial cells rather than typical neuronal Lewy bodies. DLB has neuropathological overlap with PD with dementia (PDD) in that both disorders are characterized by Lewy body formation in the cortex. PDD is clinically defined by the so-called 1-year rule, whereby motor symptoms antedate cognitive symptoms by over 1 year, whereas motor and cognitive symptoms coincide within a year in DLB. Although different in the temporal profile of the emergence of cognitive and neurobehavioral symptoms, typical features of DLB and/or PDD include hallucinations (especially visual), fluctuations in cognition, and dream enactment behavior. DLB, however, also features significant cerebral amyloidopathy more characteristic of AD. Although DLB is often characterized as representing an overlap between PD and AD neuropathology, neurofibrillary tangles are less common in DLB compared to prototypical AD.
PRESENTATION AND EVALUATION
The diagnosis of PD is of prognostic importance, as well as of therapeutic significance, because PD almost always responds at least somewhat to dopaminergic medications whereas the atypical parkinsonian conditions often do not. In general, the features of PD that tend to improve the most with levodopa or dopamine agonists include bradykinesia, especially fine distal motor dexterity, and rigidity in the arms and legs. In some individuals, features of rest tremor can improve significantly with levodopa, especially in the presence of more prominent bradykinesia. Axial motor features including
hypophonia, dysphagia, and postural instability are often medication refractory and signify the influence of nondopaminergic changes superimposed on top of nigrostriatal dopaminergic denervation.
While it may be difficult to distinguish between PD and Parkinson-plus syndromes in the early stages of the illness, with disease progression the clinical distinctions of the Parkinson-plus disorders become more apparent through the development of other neurologic findings, such as loss of downward ocular movements in PSP, or cerebellar ataxia and autonomic dysfunction (eg, postural hypotension, loss of bladder control, and impotence), which can be seen to a mild-to-moderate degree in PD but are often very prominent in MSA.
PD begins insidiously and gradually progresses. Three of the most helpful clues that one is likely dealing with a cause of parkinsonism other than idiopathic PD are (1) a symmetrical onset of symptoms (PD often begins asymmetrically on one side of the body), (2) a lack of a substantial clinical response to adequate levodopa therapy, and (3) the absence of rest tremor—though this latter feature is less specific and is present to variable degrees in PD. There are three commonly cited sets of clinical criteria for diagnosing PD: the UK Brain Bank Clinical Diagnostic criteria, the Gelb criteria, and the 2015 Movement Disorders Society (MDS) criteria. Each emphasizes that bradykinesia, the most prevalent motor feature of PD, must be present and that other causes of parkinsonism should be excluded. The UK Brain Bank criteria use the presence of postural instability as an inclusion criterion for PD, whereas the Gelb criteria deemphasize this feature—given that it can also be seen in atypical parkinsonian conditions—and suggest that asymmetry of motor presentation should be the key inclusion criterion for PD. The MDS criteria for “clinically established” PD require the absence of exclusionary criteria, the presence of bradykinesia, and at least 2 of 4 supportive criteria: (1) a levodopa treatment response, (2) levodopa-induced dyskinesias, (3) rest tremor, and (4) either olfactory loss and/or cardiac sympathetic denervation. Clinical features suggesting an alternative parkinsonian diagnosis, so-called “red flags,” are listed in Table 61-3, and a comparison of diagnostic features for PD and atypical parkinsonian conditions is presented in Table 61-4. One common misdiagnosis is tremor due to essential tremor, which can even be unilateral, although it more commonly is bilateral. Helpful in the diagnosis is that the tremor caused by PD is a predominant rest tremor, whereas essential tremor
is not typically present at rest, but appears with holding the arms in front of the body (postural tremor) and increases in amplitude with activity of the arm (kinetic or action tremor), such as with handwriting or performing the finger-to-nose maneuver. The presence of mixed and asymmetric tremor syndromes can be particularly challenging, as PD and essential tremor sometimes coexist in the same patient.
TABLE 61-3 ■ CRITERIA TO EXCLUDE THE DIAGNOSIS OF PARKINSON DISEASE IN FAVOR OF ANOTHER CAUSE OF PARKINSONISM
TABLE 61-4 ■ COMPARISON OF DIAGNOSTIC FEATURES FOR PD AND OTHER ATYPICAL PARKINSONIAN CONDITIONS
Although the diagnosis of PD rests largely on the clinical history and examination, there are adjunctive diagnostic measures that can be useful in making the proper diagnosis. A history of a positive response to levodopa or other dopaminergic medications, for example, is seen in almost all patients with PD. That having been said, many patients with atypical parkinsonian conditions—in particular MSA and DLB—will also describe a generally
milder improvement in certain motor features from dopaminergic medications. Dopamine transporter imaging through SPECT (I-123 ioflupane SPECT or DaTscan) provides molecular imaging evidence to confirm the loss of nigrostriatal dopaminergic nerve terminals, consistent with parkinsonism (Figure 61-1). It cannot, however, differentiate between PD and other causes of neurodegenerative parkinsonism, such as DLB, PSP, or MSA; all of these conditions will have evidence of nigrostriatal losses on dopamine transporter imaging. Dopamine transporter SPECT imaging has been approved to distinguish essential tremor from PD in patients with atypical tremor manifestations. This imaging modality can also be helpful to distinguish PD from drug-induced parkinsonism. Impaired performance on olfactory identification testing has also shown good correlation with postmortem findings of PD in patients presenting for suspected parkinsonism but can also be seen in other neurodegenerative conditions, such as AD or DLB. Scratch-and-sniff tests for odor identification are available commercially and can be completed and scored in the clinic relatively quickly.
FIGURE 61-1. A. Normal dopamine transporter imaging through single-photon emission computed tomography (DaT SPECT scan) in a patient with essential tremor. B. Asymmetric loss of putaminal dopaminergic nerve terminals affecting the right striatum more than the left consistent with a primary neurodegenerative parkinsonian condition.
Updated diagnostic criteria for DLB were published in 2017. Individuals meet the threshold for “probable DLB” if they show findings of a progressive dementia and have at least two of the core features of DLB: (1) parkinsonism,
(2) cognitive fluctuations, (3) rapid eye movement (REM) sleep behavior disorder (RBD), and (4) recurrent visual hallucinations. A positive biomarker test—including an abnormal striatal DaT SPECT, polysomnography findings consistent with RBD, or sympathetic denervation on myocardial scintigraphy—can also be substituted for one of these four core criteria in order to meet the “probable DLB” diagnostic threshold. The US Food and Drug Administration (FDA) has approved PET radiopharmaceuticals to detect the presence of amyloid plaques (eg, florbetapir) and neurofibrillary tangles (flortaucipir), which may be useful if diagnostic uncertainty remains. The FDA is expected to review applications shortly for diagnostic amyloid-beta blood tests that would allow the detection of cerebral amyloid disorders, including AD, in relevant clinical contexts. Whether these tests may be employed to distinguish different prognostic trajectories in individuals with common early synucleinopathies is a topic worth monitoring—this is especially true given that manifest DLB is characterized by higher cortical amyloid-beta plaque levels than idiopathic PD.
A recent proposal to distinguish so-called body-first versus brain-first subtypes of PD may blur the line between PD and DLB diagnoses. The proposed subtyping is based on the temporal relationship between the onset of RBD and the motor diagnosis of PD. A body-first subtype would be defined when RBD precedes the motor diagnosis by at least 1 year, as opposed to this parasomnia emerging after motor symptoms or not at all. This subtyping may have the advantage of early-stage prognostication, but its utility in more advanced disease is questionable.
Although there are many motor features seen in early PD, the scope of nonmotor features associated with early PD is even larger. These can include (but are not limited to) olfactory impairment, sleep difficulties, depression, anxiety, chronic constipation, limb pain, apathy, erectile dysfunction, cognitive impairment, drooling, rhinorrhea, and other autonomic features.
Although some of these features can be a secondary development due to disability accrued from PD, almost all of them are aggravated by primary Lewy body–related neuronal changes seen in various areas of the nervous
system ranging from the enteric nerves of the gastrointestinal tract to the cholinergic nerves arising from the basal forebrain, which innervate the cerebral cortex. Although many of these features become increasingly common with age, the concomitant development of three or more of these features in an older individual without a clear alternate explanation should prompt a work-up for a parkinsonian condition.
Early cognitive changes seen in PD include difficulties with memory and attention as well as executive dysfunction, the latter of which refers to planning, multitasking, and decision-making capacity. In some patients, these cognitive features may be stable for many years, whereas in others they may progress to dementia. It is estimated that about 50% of individuals with PD will develop dementia (PDD) within 10 years of their initial diagnosis.
MSA is a parkinsonian disorder characterized by aggressive α-synuclein–related cellular loss in the brain stem, cerebellum, and basal ganglia. The median age of onset is in the sixth decade of life with a mean time to severe disability of approximately 5 years from the time of diagnosis. Progressive cell death occurs not only in dopaminergic cells but in many different neuronal systems including the basal ganglia, substantia nigra, locus coeruleus, pontine nuclei, cerebellar Purkinje cells, and the intermediolateral cell column of the spinal cord. MSA is typically grouped into two clinical categories: MSA-P, where parkinsonism is seen, and MSA-C, where cerebellar features including ataxic gait, dystaxic limb movements, and ataxic speech predominate. Other characteristic features of both MSA-P and MSA-C that differ from PD include severe autonomic symptoms such as orthostatic light-headedness, blood pressure fluctuations, and urinary dysfunction. Patients with MSA can also experience nocturnal stridor characterized by laryngeal obstruction secondary to vocal fold hypokinesia, which manifests with a high-pitched inspiratory noise. Unlike obstructive sleep apnea, nocturnal stridor can be acutely life threatening. While emergency tracheostomies have been performed in MSA for life-threatening stridor, this may be inconsistent with overall goals of care for such patients. Annual follow-up with an otolaryngologist can be an effective method for monitoring this symptom over time, and some patients may benefit from nasal continuous positive airway pressure (CPAP). Unlike other atypical parkinsonian conditions, cognitive impairment is not a typical feature of MSA but may be present in a small subset.
PSP and CBS are both tauopathies associated with parkinsonism. PSP tends to present in the sixth and seventh decades of life with early oculomotor findings including downgaze impairment, axial rigidity, and postural instability. Patients with PSP also develop a frontal-predominant cognitive syndrome characterized by apathy, executive dysfunction, and pseudobulbar affect. CBS is the clinical diagnosis given for patients with suspected corticobasal degeneration (CBD), the latter of which refers specifically to neuropathologic findings. Clinically defined CBS is associated with several distinct postmortem histopathologies, including CBD, other forms of tau-related frontotemporal lobar degeneration, such as PSP, and AD. Motor manifestations of CBS also tend to occur in the sixth or seventh decade of life and are characterized by progressive asymmetric rigidity, myoclonus, parkinsonism, and an unusual motor phenomenon termed “alien limb,” during which patients experience adventitious semipurposeful movements of one limb. CBS is one of the exceptions to the general rule that asymmetric limb motor involvement favors a diagnosis of PD. Early cognitive changes include praxis difficulties and language impairment. Cortical sensory loss can also be seen. The variable presence of these clinical features and their overlap with features seen in PSP, PD, and frontotemporal dementia (FTD) has led clinicians to use the term CBS to describe clinically probable, though not pathologically confirmed, CBD.
MANAGEMENT
Treatment of patients with PD can be divided into three major categories: medications, physical (and mental health) therapy, and surgery. Although the pharmacologic strategies described below apply primarily to PD, they can also be tried in atypical parkinsonian conditions. MSA, PSP, and CBD, however, are typically associated with a more limited response to medications, underscoring their overall worse prognosis.
Dopaminergic Therapies
Dopamine replacement therapy is the primary medical approach to treating PD, and a variety of dopaminergic agents are available (Table 61-5). The most powerful oral drug is levodopa, the immediate precursor of dopamine. Levodopa, an amino acid, can enter the brain, whereas dopamine is blocked by the blood-brain barrier. Levodopa is usually administered combined with a peripheral decarboxylase inhibitor
(carbidopa or benserazide) to prevent formation of dopamine in the peripheral tissues, thereby increasing levodopa’s bioavailability and also markedly reducing gastrointestinal side effects. The brand name Sinemet is a combination of carbidopa and levodopa; the brand name Madopar is a combination of benserazide and levodopa. Such combination drugs are available in standard (ie, immediate-release) and extended-release formulations. The former allows a more rapid and predictable “on,” and the latter allows for a slightly longer plasma half-life, but with a slower and less predictable “on.” The combination of the two release formulations can be administered in an attempt to smooth out and extend plasma levels of levodopa. A version of carbidopa/levodopa that dissolves under the tongue (Parcopa) and enters the stomach via swallowing saliva is also available.
This orally dissolving formulation has particular usefulness for patients who have swallowing difficulties.
TABLE 61-5 ■ DOPAMINERGIC AGENTS
Although levodopa is the most effective drug to treat the symptoms of PD, over half of patients develop troublesome complications of disabling response fluctuations (“wearing-off”) and/or dyskinesias after 5 years of levodopa therapy. Besides being metabolized by aromatic amino acid decarboxylase (commonly known as dopa decarboxylase), levodopa is also metabolized by catechol-O-methyltransferase (COMT) to form 3-O-methyldopa. Entacapone is a currently available COMT inhibitor. This agent extends the plasma half-life of levodopa and also increases its peak plasma concentration, thereby prolonging the duration of action of each dose of levodopa. Its clinical indication is to help reduce motor fluctuations, that is, increase “on” time and reduce “off” time. Because entacapone enhances levodopa’s efficacy, it can increase dyskinesias, and the dosage of levodopa may need to be lowered. Entacapone is very short acting, and each 200-mg tablet is taken simultaneously with levodopa. Entacapone is also available in a combination pill with carbidopa/levodopa (Stalevo).
Tolcapone (100- and 200-mg tablets) is a more potent COMT inhibitor with a longer duration of action, but it is encumbered by a greater risk of diarrhea and hepatotoxicity, the latter of which has led to its removal from the market in the United States. After levodopa, the next most powerful oral drugs in treating PD symptoms are the dopamine agonists, several of which are available. The ergot compounds pergolide, bromocriptine, and cabergoline have the potential to induce fibrosis (cardiac valvulopathy and retroperitoneal, pleuropulmonary, and pericardial fibrosis), so these agents are not recommended; indeed, pergolide has been withdrawn from the US market. Pramipexole and ropinirole appear to be equally effective at therapeutic levels. Dopamine agonists are more likely than levodopa to cause hallucinations, confusion, and psychosis, especially in older adults. Thus, it is safer to utilize levodopa in patients older than 70 years. On the other hand, clinical trials have shown that dopamine agonists are less likely than levodopa to produce dyskinesias and the wearing-off phenomenon. These differences are most likely due to the relatively lower potency/efficacy and longer half-life of dopamine agonists compared to levodopa. Slow-release preparations of ropinirole and pramipexole are also available. Other problems more likely to occur with dopamine agonists than levodopa are sudden sleep attacks (including falling asleep at the wheel), daytime drowsiness, ankle edema, and impulse control problems such as hypersexuality and compulsive gambling, shopping, and binge eating. The newest dopamine agonist is rotigotine, which is applied via a dermal patch to the upper torso or arms. It is useful for those with swallowing difficulties and may help smooth out motor fluctuations and nocturnal akinesia when the last prebedtime dose of levodopa does not last throughout the night.
Rotigotine is a high-potency agonist at human dopamine D1, D2, and D3 receptors with a lower potency at D4 and D5 receptors. Therefore, rotigotine
differs from conventional dopamine D2 agonists used in the treatment of PD, such as ropinirole and pramipexole, which lack activity at the D1 and D5 receptors, but resembles apomorphine, which has greater efficacy in PD than other dopamine agonists. The preferential D1 receptor agonism may explain rotigotine’s higher efficacy in treating freezing of gait in PD compared to pramipexole and ropinirole. Another advantage of rotigotine is the transdermal delivery, facilitating more steady drug delivery throughout the day.
Apomorphine may be the most powerful dopamine agonist, but it can cause intense nausea and historically has needed to be injected subcutaneously. It is used to provide faster relief to overcome a disabling “off” state. A newly developed formulation of apomorphine is now available as a sublingual film (Kynmobi) and may be useful for treating patients with precipitous motor fluctuations (see Treatment of “Wearing Off”).
Amantadine is an adjunctive antiparkinsonian drug with several pharmacologic actions; it has mild antimuscarinic effects, but more importantly, it can activate release of dopamine from nerve terminals, block dopamine uptake into the nerve terminals, and block glutamate N-methyl-D-aspartate (NMDA) receptors. Its dopaminergic actions make it a useful drug to relieve symptoms in approximately two-thirds of patients, but it can induce livedo reticularis, ankle edema, visual hallucinations, and confusion. Its antiglutamatergic action is useful in reducing the severity of levodopa-induced dyskinesias and, in fact, makes it the only established effective antidyskinetic agent. The dose of amantadine for its anti-PD effect is usually 100 mg twice daily, but its antidyskinetic effect requires higher dosages, usually 300 to 400 mg/day. Unfortunately, the antidyskinetic effect tends to lessen over time. Older individuals often do not tolerate amantadine well because of mental adverse effects of confusion and hallucinations.
Domperidone is a peripherally active dopamine receptor blocker and is useful in preventing gastrointestinal upset from levodopa and the dopamine agonists. It is not available in the United States but is available in other countries including Canada. Monoamine oxidase type B (MAO-B) inhibitors (selegiline, rasagiline) offer mildly effective symptomatic benefit and are without the significant diet-linked hypertensive side effects seen with MAO-A inhibitors, and therefore can be used in the presence of levodopa therapy.
Although there has been considerable debate about possible protective or disease-modifying benefit of MAO-B inhibitors, numerous well-powered
trials have failed to convincingly demonstrate a neuroprotective benefit. The results of these trials have been interpreted variably, however, given that selegiline and rasagiline appear to have a symptomatic benefit, which has contributed to some methodological concerns regarding specific trial designs. Selegiline, but not rasagiline, is metabolized to L-amphetamine and methamphetamine. Both selegiline and rasagiline can reduce the severity of motor fluctuations with levodopa. The newly developed drug safinamide is thought to offer greater specificity for MAO-B, which in theory may reduce off-target side effects related to diet.
Motor Complications of Dopaminergic Therapies
Many patients on levodopa therapy develop motor complications (Table 61-6). These motor complications, also referred to as “motor fluctuations,” usually begin as mild wearing-off, which can be defined as occurring when an adequate dose of levodopa does not last at least 6 hours and motor symptoms of bradykinesia, rigidity, or tremor emerge or worsen. Typically, in the first couple of years of treatment, there is a long-duration response, so the timing of doses of levodopa is not important. Over time, the long-duration response is lost and only a short-duration response occurs; patients then develop the wearing-off phenomenon. The “off” episodes tend to be mild at first, but over time become more frequent or severe, with more severe parkinsonism. Simultaneously, the duration of the “on” response becomes shorter. Eventually, some patients develop random, sudden “offs” in which a deep state of parkinsonism develops over minutes rather than tens of minutes and is less predictable in terms of synchrony with the dosing of levodopa. Many patients who develop response fluctuations also develop abnormal involuntary movements, that is, dyskinesias.
TABLE 61-6 ■ PATTERN OF DEVELOPMENT OF RESPONSE FLUCTUATIONS, DYSKINESIAS, AND OTHER COMPLICATIONS
Treatment of “Wearing Off”
The wearing-off phenomenon, when mild, may be ameliorated slightly with the addition of selegiline, rasagiline, or safinamide, as each MAO-B inhibitor potentiates the action of levodopa. A higher dose of levodopa may be necessary, but more frequent dosing of levodopa may be the simplest approach to manage this motor complication. Many patients can require six or more doses of levodopa per day and then, eventually, can develop dose failures owing to poor gastric emptying. These patients are often considered for duodenal infusion of levodopa or DBS (see Surgical Therapy later).
Continuous-release carbidopa/levodopa (Sinemet CR) can also be effective in some patients with mild wearing-off, as can the combination of both immediate- and extended-release formulations. Newer formulations of carbidopa/levodopa have attempted to address the clinical need for more uniform levodopa dosing throughout the day. These include Rytary, a capsule that contains immediate- and extended-release levodopa with carbidopa. Rytary and Sinemet make use of slightly different doses, which may merit close monitoring when transitioning from one drug to the other.
Optimally, Rytary may be a useful Sinemet substitution in patients who depend on levodopa administration every 2 to 4 hours throughout the day, hopefully allowing for less frequent dosing. A short-acting formulation of levodopa delivered in an inhaler (Inbrija) may be useful in the setting of precipitous motor “offs.” It may start working within 10 to 30 minutes, though, much like inhalers for pulmonary conditions, its efficacy can be altered by the technique of inhaler utilization. Dopamine agonists, which have a longer biological half-life than levodopa, can also be used in combination with immediate-release or continuous-release versions of carbidopa/levodopa. The addition of a dopamine agonist tends to make the “off” state less severe when used in combination with carbidopa/levodopa. COMT inhibitors have also been found useful for treating wearing-off. Because of the short half-life of entacapone, it is given with each dose of carbidopa/levodopa and is about as effective as rasagiline in reducing the amount of daily “off” time. For those patients who have “offs” at a specific time of day, entacapone can be strategically given just with the dosage of carbidopa/levodopa that precedes this “off” period. Once-daily dosing is now possible with a newly developed COMT inhibitor, opicapone. Adenosine A2A receptors are expressed in the striatum, and a newly approved A2A-receptor antagonist (istradefylline) may serve to reduce activity of the basal ganglia’s indirect pathway, thereby relieving parkinsonian hypokinesia. Istradefylline is a once-daily medication used to reduce “off” time when used in conjunction with carbidopa/levodopa in PD patients with motor fluctuations.
Behavioral or sensory “offs” can occur just as motor “offs” do, but they often appear in the absence of any motor “off” (ie, without a return of parkinsonism).
Behavioral and sensory “offs” tend not to be easily recognized because the treating physician sees no visible motor changes. Behavioral/sensory “offs” can consist of pain, akathisia, depression, anxiety, dysphoria, or panic, and usually a combination of more than one of these. Sensory “offs,” like dystonic “offs,” are very disabling. It is often the presence of one of these sensory and behavioral phenomena, more so than motoric parkinsonian or dystonic “offs,” that drives the patient to take more and more levodopa, leading a few patients to develop an addictive relationship with dopaminergic medications, so-called dopamine dysregulation syndrome.
Treatment of Dyskinesias
Dyskinesias are involuntary movements and occur in two major forms— chorea and dystonia. Choreiform movements are irregular, nonrhythmic, unsustained dance-like movements that seem to flow from one body part to another and can appear like benign fidgeting. Dystonic movements are more sustained, twisting contractions. Many patients have a combination of choreiform and dystonic dyskinesias.
Peak-dose dyskinesias occur when the plasma concentrations of levodopa or dopamine agonists are at their peak and the synaptic brain concentration of dopamine is too high. Reducing the individual dose can resolve this problem, though the patient may need to take more frequent doses at this lower amount. An alternative approach is to add amantadine, which suppresses the severity of dyskinesias, possibly because of its antiglutamatergic action. Start with a dose of 100 mg BID and increase up to 200 mg BID if necessary. Buspirone in doses up to 20 mg/day may also be of benefit in treating dyskinesias in some patients.
Some patients may develop “off” dyskinesias. In the absence of so-called early morning “off” dystonia, which responds well to dopaminergic therapies, such patients are encouraged to consider DBS (see later under Surgical Therapy). Depending on their distribution within the body and the disability associated with them, dystonic dyskinesias can also be treated with local injections of a chemodenervation agent such as botulinum toxin. This treatment can be associated with a significant improvement in quality of life but will also weaken a muscle group and thereby can impact a patient’s function, particularly if the dystonic movements are occurring in the hands.
Diphasic dyskinesias occur at the beginning and end of each dose, not during the time of peak plasma and brain levels of dopaminergic medications. They tend particularly to affect the legs with a mixture of chorea and dystonia.
Dopamine Medication–Related Nonmotor Complications
In addition to motor features, a number of nonmotor problems can also occur as complications from dopaminergic therapy. Mental changes of psychosis, confusion, agitation, hallucinations, paranoid delusions, punding, impulse control disorders, and excessive sleeping are probably related to activation of dopamine receptors in anteroventral striatal regions, or nonstriatal regions, particularly the cortical and limbic structures.
Drug-induced hallucinations tend to be mild, visual rather than auditory, and not frightening. Consideration should be given to reducing the total dose of dopaminergic medication to whatever degree is tolerable for the patient. A complete review of medications is also indicated to identify any other symptomatic treatments that might be worsening encephalopathy, including benzodiazepines, anticholinergics, and opioids. Adjunctive treatment can begin with the addition of either quetiapine, starting with 25 mg at bedtime and increasing steadily until the hallucinations are brought under control, or pimavanserin (17 mg 1–2 times daily, although dosing may need to be reduced in patients taking CYP3A4 inhibitors). Pimavanserin is a newer drug that targets serotoninergic neurotransmission through its action as an inverse agonist and antagonist at 5-HT2A receptors. Unlike quetiapine, pimavanserin has prospective clinical trial data supporting its efficacy for treating psychosis in PD. Even so, the relative safety and easy dosing of quetiapine have led to its continued use as symptomatic treatment. Currently, a head-to-head trial is underway comparing the safety and efficacy of these two drugs in PD with psychosis. If quetiapine or pimavanserin is ineffective, or if the hallucinations are frightening, clozapine, a stronger antipsychotic that will not worsen motor features of PD, should be considered. As mentioned previously, the reason clozapine is not the first drug of choice for dopaminergic-induced hallucinations is that clozapine causes agranulocytosis in approximately 1% to 2% of patients.
Patients must have their blood counts monitored weekly for this potential complication, and the drug should be discontinued if leukopenia develops. Both
quetiapine and clozapine often cause drowsiness, so bedtime dosing is recommended.
If the psychosis is severe or if the patient is in an acute delirious state, hospitalization may be necessary, with immediate initiation of antipsychotic medications, and some reduction in anti-PD medication. These medications could even be withdrawn temporarily to overcome the psychosis, but this should be done stepwise over a 3-day period to avoid the neuroleptic malignant-like syndrome that could occur with sudden withdrawal of levodopa.
Dopamine agonist medications are associated with sleep attacks and impulse control disorders. Both of these issues can also be seen with levodopa but are much less common and less severe. Sleep attacks often manifest with a sudden wave of sleepiness that comes on with little warning and can be particularly dangerous for patients who are driving at the time.
Impulse control disorders consist of behavioral changes such as compulsive gambling, shopping, and eating, and hypersexual behaviors. Not surprisingly, these changes can often have significant detrimental effects on family relationships. Both of these side effects are, to some degree, dose-dependent and typically necessitate reducing the dose of the dopamine agonist or stopping it altogether. Memantine, an NMDA receptor antagonist, has been shown to be of benefit in some patients with dopamine agonist–induced impulse control disorders in PD.
Nondopaminergic Therapies
Nondopaminergic agents (Table 61-7) are useful to treat both motor and nonmotor symptoms of PD. Antimuscarinic drugs have been used since the 1950s to treat parkinsonian tremor but have limited efficacy and frequently lead to cognitive impairment and hallucinations in the elderly population. For this reason, antimuscarinics should be avoided in patients older than 70 years. Furthermore, exposure to antimuscarinic drugs has been linked to a higher risk of developing freezing of gait in PD.
TABLE 61-7 ■ NONDOPAMINERGIC AGENTS
Depression is common in patients with PD and often precedes the motor symptoms of PD. Selective serotonin reuptake inhibitors (SSRIs), serotonin-norepinephrine reuptake inhibitors (SNRIs), and other antidepressants, including bupropion and the tricyclic antidepressants, are useful. If insomnia is a problem for the patient, using an antidepressant that is also a soporific can be doubly advantageous: medications such as the tricyclic nortriptyline (which has fewer anticholinergic effects than amitriptyline) or low-dose mirtazapine are good options. Recent data also suggest cognitive behavioral therapy (CBT) may improve PD depression more than existing medication-based approaches and can be delivered through telephone/virtual appointments.
Benzodiazepines including clonazepam are effective in reducing symptoms of dream enactment behavior attributable to rapid eye movement (REM) sleep behavior disorder (RBD). Nevertheless, they should be used with caution given their potential for cognitive side effects, increased risk of falls, rebound anxiety, and addictive potential.
Psychosis induced by levodopa and the dopamine agonists can usually be controlled by quetiapine and clozapine without worsening the parkinsonism.
Other antipsychotic agents—be they typical or atypical neuroleptic medications—are more likely to worsen the parkinsonism; therefore, they should be avoided. Clozapine is more effective than quetiapine, but because clozapine treatment requires weekly blood cell counts due to the risk of agranulocytosis, low-dose quetiapine should be tried first.
Insomnia in PD requires a detailed history to distinguish specific causes of impaired sleep that may require different management approaches.
Common causes of insomnia in PD include restless legs, persistent tremor, nocturnal akinesia, dream enactment behavior, bladder dysfunction, and early morning motor “off” symptoms or dystonia. Both sleep-onset insomnia and sleep-maintenance insomnia occur in PD, though sleep-maintenance insomnia may be more common and troublesome. Sleep-onset insomnia is treated conventionally with low doses of hypnotics and sedating antidepressants such as low-dose mirtazapine or trazodone. In demented patients, low nighttime doses of the atypical antipsychotic quetiapine may be useful if neurobehavioral disturbances occur at night. Sleep-maintenance insomnia is often due to motor dysfunction. Bradykinesia with difficulty moving in bed or adjusting bedclothes is a common cause of sleep-maintenance insomnia in PD. Levodopa has a relatively short serum half-life, and a common experience is loss of levodopa effect in the middle of the night with worsening bradykinesia and nocturnal arousals. Medication schedule manipulations, such as instituting or increasing a bedtime dose of levodopa, may be useful. Similarly, bedtime use of extended-release levodopa preparations, adjunctive agents that lengthen the levodopa half-life, or dopamine agonists with relatively long half-lives may ameliorate this form of sleep-maintenance insomnia. An additional common source of sleep-maintenance insomnia in more advanced PD is bladder dysfunction, which in men with PD may coexist with prostate enlargement. Specifically, autonomic dysfunction leading to urinary frequency, urgency, and incontinence is common in more advanced PD. Conventional approaches to treating bladder dysfunction may be useful. Nocturnal use of gabapentin can also be of benefit in some patients.
RBD is common in patients with PD, DLB, and MSA and often precedes the appearance of these disorders. RBD manifests with complex, nonstereotyped dream-enactment behavior and is usually associated with vivid or frightening dream content. Normally, dreaming in REM sleep is associated with paralysis of all skeletal muscles except those of the eyes
and diaphragm. This normal REM-associated muscle paralysis is reduced in RBD. RBD in PD, as in other settings, does not seem to cause daytime sleepiness but can result in injuries or disrupt bed partner rest. Infrequent and mild episodes of RBD probably do not require treatment, but more severe episodes can be dangerous to the bed partner or patient. Withdrawal of antidepressants that can precipitate or exacerbate RBD may be worthwhile.
Safety measures within the sleeping room may be necessary. These can include use of separate beds, placement of mattresses on the floor, sleeping on the first floor, and removing dangerous objects from the bedroom. The mainstay of medical treatment is bedtime clonazepam (0.5–2 mg), which appears to be effective and is well tolerated by the majority of patients. Melatonin—though less effective—may be tried first and can be combined with clonazepam. In some patients, cholinesterase inhibitors may also help RBD symptoms. Dopaminergic therapy probably has little or no effect on RBD.
Periodic leg movements of sleep (PLMS) and restless legs syndrome (RLS) are estimated to be twice as common in PD as in matched populations. Despite this association, the relationship between PD and RLS is complex.
While both RLS and PLMS may contribute to insomnia in PD, one study showed no significant worsening of daytime sleepiness in PD patients with RLS compared to those without. No evidence exists that RLS predisposes patients to develop PD later in life. Though dopaminergic dysfunction may play a role in both PD and RLS, imaging studies of subjects with RLS without PD have not shown convincing evidence of a nigrostriatal dopaminergic deficit. These findings suggest that RLS may not in fact be a precursor to the cardinal motor symptoms of PD and may not be a “secondary” symptom of PD, but rather a separate disease entity that can be exacerbated by PD. Assessment and management of RLS in PD involve evaluation for iron deficiency, iron supplementation when appropriate, and use of low doses of dopaminergic agents such as long-acting dopamine agonists or L-dopa. In patients with poor or complicated responses to dopaminergic agents, gabapentin, clonazepam, or low-dose opiates may be useful.
Fatigue is an increasingly noted nonmotor symptom in PD of unclear etiology but has been shown to correlate with poor sleep and depression. Clinicians should attempt to differentiate fatigue from excessive daytime sleepiness, the latter of which is often due to poor quality of sleep at night or
adverse effects associated with excess dopaminergic medications, necessitating a different approach to diagnostic work-up and management. Many patients describe fatigue as their first presenting symptom.
Management should be aimed at underlying causes, such as depression. If needed, medications such as modafinil or nonprescription therapies such as liberalizing caffeine consumption can provide some benefit.
Mild cognitive symptoms can be seen in some PD patients with early disease and worsen with increasing disease duration. Early features typically include impaired attention, verbal memory, and executive dysfunction summarized as a subcortical-frontal syndrome. Dementia in PD is often associated with the development of significant visuospatial impairments, memory difficulties, and hallucinations—the latter of which may also be seen in response to dopaminergic or anticholinergic drugs. PDD and DLB patients are at a particularly high risk for developing delirium, either in association with new medications or because of underlying acute medical illnesses.
Symptoms of delirium often involve profound disorientation and psychosis that can sometimes take weeks to resolve. Cholinergic deficits affecting subcortical and cortical structures are thought to play a significant role in PD cognitive impairment. Cholinesterase inhibitors, including donepezil and rivastigmine, are useful therapies for improving cognitive symptoms at all stages of PD. Their efficacy is limited, however, in part by modest central nervous system (CNS) bioavailability.
Orthostatic hypotension is common in PD and can be due to the disease itself or to dopaminergic medications, the latter of which lower blood pressure. It can also represent an early manifestation of an atypical parkinsonian condition, namely MSA. Fludrocortisone, midodrine, or droxidopa can overcome this symptom to some extent.
Constipation is common in PD. It may be further aggravated by anticholinergic medications. Besides changing dietary habits by increasing intake of fiber and dried fruits, polyethylene glycol or lubiprostone can be effective. For those who have bloating because of suppression of peristalsis when they are “off,” keeping them “on” with levodopa can be beneficial.
Diet
There is emerging evidence that dietary pattern may modulate the course of PD, including at the prodromal level. A recent analysis of the Nurses’ Health
Study found that adherence to a Mediterranean diet was inversely associated with the presence of prodromal PD features, including constipation, excessive daytime sleepiness, and depression. Adherence to the Mediterranean diet is also associated with lower risk of PD. These studies are consistent with clinical research models highlighting the relevance of gut-brain axis functions in PD.
Exercise and Physical Activity
An active exercise program encourages patients to take ownership of their own care; allows muscle stretching and full range of joint mobility; increases aerobic capacity, muscle strength, and motor skills; and improves a patient’s mental attitude toward fighting the disease. Preclinical studies have shown that exercise slows the degeneration of dopamine neurons following local toxin exposure, theoretically because exercise increases brain neurotrophic factors. There is also increasing evidence that sedentary behaviors, irrespective of the amount of formal exercise one performs, may have a deleterious impact not only on physical condition but also on metabolic functions. These changes increase the risk of frailty among patients with PD, leading to a decline in health. Encouraging patients to reduce the amount of time they spend seated each day is a good way to empower them to influence their own PD prognosis.
A regular routine of physical exercise, be it cardiovascular training or weight-based exercises, should be implemented as soon as the diagnosis is made and is useful in all stages of disease. Stretching exercises may help to compensate for the tendency of patients to have a reduced range of motion. In moderate-to-advanced stages of PD, formal physical therapy is more valuable, keeping the joints from becoming frozen and providing guidance on how best to remain independent in mobility, particularly through gait training, and on preventing injurious falls. One of the nonmotor symptoms of PD is a tendency toward apathy and conservative decision making with decreased motivation. Encouraging activity may help fight these symptoms.
Surgical Therapy
DBS surgery for PD was approved by the US FDA in 2002 and is associated with significant gains in quality of life for PD patients who are good surgical candidates. DBS surgery is typically indicated for PD patients with difficult-to-manage motor complications (fluctuations and/or dyskinesias) or with
medication-refractory parkinsonian tremor. DBS involves placement of an implantable pulse generator (IPG) in the chest that looks much like a pacemaker. A lead from the IPG is tunneled under the skin to a specific region of the brain, either the STN or the GPi.
When stimulation is optimized, patients will experience an “on” state without disabling “off” features. Patients are also able to reduce their dose of dopaminergic medications, and in this way, are able to reduce dyskinesias as well. With the exception of tremor, motor features that generally do not improve with dopaminergic medications (eg, postural instability, other axial motor features, some gait freezing) also do not improve with DBS. DBS typically has little effect on the nonmotor features of PD unless they are directly related to on-off fluctuations seen with dopaminergic medications.
Currently, DBS for PD is delivered in an “open loop” context, meaning that the type and intensity of stimulation delivered to each electrode is programmed and occurs constitutively once a stimulation paradigm is turned on in the IPG. The next important advance in DBS care will be the testing and validation of “closed loop” systems that can tailor stimulatory inputs to the brain based on DBS-detection of local neurophysiologic biomarkers that fluctuate in real-time depending on the neural correlates of relevant volitional movements.
Patients with significant speech, gait, or cognitive difficulties, or with depression and suicidal ideation, are usually not good candidates for DBS, not only because these symptoms do not respond to stimulation, but also because in some patients these features may become notably worse after DBS, including the risk of suicide. The selection of appropriate candidates for DBS is often best done under the guidance of an experienced movement disorder neurologist. DBS is not suggested for patients with atypical parkinsonian conditions, since the majority of their motor features do not respond to dopaminergic medications.
Focused ultrasound (FUS) was FDA-approved in 2018 for the treatment of parkinsonian tremor; it delivers a focused thermal ablation to the STN and can be directed to other structures as well. FUS does not involve anesthesia and may be an appropriate therapy for surgical candidates who cannot or do not wish to undergo conventional DBS surgery. One relative advantage is that, unlike DBS, FUS does not necessitate future IPG replacements or frequent outpatient programming sessions. A disadvantage, though, is that the FUS lesion is permanent and nonprogrammable; compared to DBS, this limits FUS’s
ability to be titrated to an individual’s most disabling symptoms as the disease advances.
CARE CONSIDERATIONS
No two patients with PD are alike in their clinical presentation or their rate of disease progression. In addition, not all motor features of PD have the same clinical or prognostic significance. Gait and cognitive difficulties, for example, play a more significant role in patient autonomy, disease staging, and overall disability. Different motor phenotypes in PD have been distinguished, such as tremor-predominant or imbalance-predominant (so-called postural instability and gait difficulties [PIGD]) subtypes. PIGD features, however, tend to be less responsive to common medication strategies, including dopaminergic therapies, and can worsen at variable rates for reasons that appear to have little to do with dopaminergic neurotransmission.
There is an increasing body of literature implicating nondopaminergic or extranigral brain changes seen with aging as mediators of disease progression in PD. Cerebral amyloid deposition and a decline in cerebrovascular integrity occur as part of brain aging. In individuals without PD, when these progressive changes exceed a critical threshold, patients who develop clinical symptoms are diagnosed with specific disorders, including AD and vascular dementia. When these changes are milder, however, most people are able to use existing neuronal reserve to compensate and prevent the development of clinically significant disability. In PD, where severe loss of striatal dopaminergic neurons is present from the earliest stage of diagnosis, neuronal compensation mechanisms are relatively impaired, leading to the development of clinically significant disease features in the presence of even low levels of age-related comorbid brain pathologies (Figure 61-2). Longitudinal studies of PD have suggested that the severity of medical comorbidities is a chief determinant of progression to disability, dementia, and death. For this reason, we recommend that all patients with PD attend carefully to common chronic medical conditions, including cardiovascular risk factor reduction through physical exercise, appropriate diet, and medication management of comorbidities like hypertension and diabetes mellitus. The longitudinal role
of a geriatrician or primary care doctor is hence indispensable for individuals with PD.
FIGURE 61-2. Schematic diagram depicting the natural history of progressive changes in Parkinson disease (PD) attributable to both dopaminergic and nondopaminergic (“extranigral”) pathologies. With advancing disease duration, medical comorbidities and other neuropathologies exert additive/synergistic effects on overall motor and nonmotor disease burden in PD.
Recent data suggest that specialist care through a neurologist specifically is associated with reduced mortality in PD. Neurologists who are familiar with PD can be instrumental in making medication adjustments over time and can serve as an access point for newly approved medications and for DBS once patients have progressed from early-stage to mid-stage PD characterized by motor fluctuations. Multidisciplinary care models for delivering rehabilitative services have been trialed against standard physical therapy in carefully controlled settings and have been shown to improve quality of life. Home-based exercise programs have also been shown to improve off-state motor function in PD in recent randomized trials.
Developing reimbursement models to initiate, sustain, and broaden the impact of these clinical trial findings is an important priority for PD patients, advocates, and clinicians. Patients with late-stage PD or with significant disability from atypical parkinsonian conditions often benefit from
consultations with palliative care physicians. This is particularly helpful for later-stage patients who are either confined to a wheelchair or who are considering nursing home placement because of inability to perform activities of daily living. Progressive dysphagia can be seen in some patients with advanced PD or atypical parkinsonian conditions and may necessitate a discussion about feeding tube placement. Discussing goals of care with patients at risk for loss of autonomy can empower patients to have control over a difficult disease.
CONCLUSION
Parkinsonian conditions including PD are common sources of disability in older individuals and are typically responsive to a wide variety of treatments. Longitudinal care with skilled geriatric and neurology providers is perhaps the most important step to ensure an individualized medical approach that prioritizes quality of life.
ACKNOWLEDGMENTS
We wish to thank Dr. Stanley Fahn, the previous author of this chapter.
FURTHER READING
Braak H, Ghebremedhin E, Rub U, Bratzke H, Del Tredici K. Stages in the development of Parkinson’s disease-related pathology. Cell Tissue Res. 2004;318(1):121–134.
Chaudhuri KR, Healy DG, Schapira AH. Non-motor symptoms of Parkinson’s disease: diagnosis and management. Lancet Neurol. 2006;5(3):235–245.
Cheng EM, Tonn S, Swain-Eng R, Factor SA, Weiner WJ, Bever CT Jr. Quality improvement in neurology: AAN Parkinson disease quality measures: report of the Quality Measurement and Reporting Subcommittee of the American Academy of Neurology. Neurology. 2010;75(22):2021–2027.
Dauer W, Przedborski S. Parkinson’s disease: mechanisms and models.
Neuron. 2003;39(6):889–909.
Deuschl G, Schade-Brittinger C, Krack P, et al. A randomized trial of deep-brain stimulation for Parkinson’s disease. N Engl J Med. 2006;355(9):896–908.
Glimcher PW. Understanding dopamine and reinforcement learning: the dopamine reward prediction error hypothesis. Proc Natl Acad Sci U S A. 2011;108(suppl 3): 15647–15654.
Klein C, Schlossmacher MG. Parkinson disease, 10 years after its genetic revolution: multiple clues to a complex disorder. Neurology.
2007;69(22):2093–2104.
Krack P, Batir A, Van Blercom N, et al. Five-year follow-up of bilateral stimulation of the subthalamic nucleus in advanced Parkinson’s disease. N Engl J Med. 2003; 349(20):1925–1934.
Langston JW. The Parkinson’s complex: parkinsonism is just the tip of the iceberg. Ann Neurol. 2006;59(4):591–596.
Marras C, Lang A. Invited article: changing concepts in Parkinson disease: moving beyond the decade of the brain. Neurology. 2008;70(21):1996–2003.
Martínez-Fernández R, Máñez-Miró JU, Rodríguez-Rojas R, et al.
Randomized trial of focused ultrasound subthalamotomy for Parkinson’s disease. N Engl J Med. 2020;383:2501–2513.
Miyasaki JM, Shannon K, Voon V, et al. Practice parameter: evaluation and treatment of depression, psychosis, and dementia in Parkinson disease (an evidence-based review): report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology.
2006;66(7):996–1002.
Pahwa R, Factor SA, Lyons KE, et al. Practice parameter: treatment of Parkinson disease with motor fluctuations and dyskinesia (an evidence-based review): report of the Quality Standards Subcommittee of the American Academy of Neurology. Neurology. 2006;66(7):983–995.
Postuma RB, Gagnon JF, Montplaisir JY. REM sleep behavior disorder: from dreams to neurodegeneration. Neurobiol Dis. 2012;46(3):553–558.
Trinh J, Farrer M. Advances in the genetics of Parkinson disease. Nat Rev Neurol. 2013;9(8):445–454.
Williams-Gray CH, Mason SL, Evans JR, et al. The CamPaIGN study of Parkinson’s disease: 10-year outlook in an incident population-based cohort. J Neurol Neurosurg Psychiatry. 2013;84(11):1258–1264.
Chapter 62
Cerebrovascular Disease
Nirav R. Bhatt, Bernardo Liberato
Although stroke is the fifth leading cause of death in the United States, it remains the single most important cause of disability. Ischemic stroke and transient ischemic attack (TIA) are parts of the same spectrum, and their diagnosis usually implies inadequate blood flow to variable areas in the brain, brain stem, or cerebellum. Many varied pathologic processes lead to either occlusion of an extra- or intracranial artery or vein causing ischemic stroke or TIA, or rupture of an intracranial artery causing hemorrhagic stroke. A precise clinical diagnosis with appropriate localization strategies and determination of likely etiology are key to establishing appropriate treatment in the acute phase as well as planning the most adequate secondary prevention according to best practice and available evidence (Figures 62-1 and 62-2). This chapter focuses on identifying the pathology, clinical features, and treatment strategies that allow for the proper care of stroke victims, with a particular emphasis on the older population.
FIGURE 62-1. A. Arrangement of the major arteries of the right side carrying blood from the heart to the brain. Also shown are vessels of collateral circulation that may modify the effects of cerebral ischemia (a, b, and c). Not shown is the circle of Willis, which also provides a source for collateral circulation. a. The anastomotic channels between the distal branches of the anterior and middle cerebral artery, termed borderzone or watershed anastomotic channels.
Note that they also occur between the posterior and middle cerebral arteries and the anterior and posterior cerebral arteries. b. The anastomotic channels occurring through the orbit between branches of the external carotid artery and ophthalmic branch of the ICA. c. Wholly extracranial anastomotic channels between the muscular branches of the ascending cervical arteries and muscular branches of the occipital artery that anastomose with the distal VA. Note that the occipital artery arises from the external carotid artery, thereby allowing reconstitution of flow in the vertebral from the carotid circulation. B. Diagram of the brain stem, cerebellum, inferior right frontal lobe, and temporal lobe transected. Principal branches of the vertebral basilar arterial system are pictured. Small branches of the vertebral and basilar artery that penetrate the medulla and pons are not pictured. The stem of the middle cerebral artery with its small, deep-penetrating lenticulostriate arteries and the circle of Willis with its small, deep-penetrating branches, are shown. C. Roman numerals I, II, III, and IV represent some of the possible variations of the circle of Willis caused by atresia of one or more of its arterial components. (A, Reproduced with permission from CM Fisher, MD.)
EPIDEMIOLOGY
Stroke is a leading cause of mortality and morbidity in the United States. In 2018, according to statistics from the Centers for Disease Control and Prevention (CDC) and American Heart Association (AHA), one in every six deaths from cardiovascular disease (CVD) was due to stroke. In the United States, someone has a stroke every 40 seconds, and every 4 minutes one death is due to a stroke. The burden is greater in the older population, since stroke incidence and mortality increase with age; approximately 66% of people hospitalized for a stroke are 65 years or older. The impact of stroke in an aging population with greater longevity is demonstrated by the fact that 17% of all strokes occur in patients 85 years and older. According to the AHA, stroke incidence is expected to more than double by 2050, with the greatest increase in the population 75 years and older, underscoring the need for greater awareness and knowledge about stroke among health professionals involved in the care of this group of patients. As in CVD in general, incidence by gender shows a predominance in men in the 60- to 79-year-old age group, while women are slightly more affected in the 80 years and older age group.
Learning Objectives
Understand the pathophysiology and clinical presentations of different types of strokes, and specific symptoms and signs associated with the involvement of each major cerebral artery and location of infarction.
Learn about the state-of-the-art neuroimaging techniques and other tests available to evaluate and diagnose stroke.
Acquire the latest information about the pharmacology, specific indications, and adverse effects of the various drugs and surgical interventions available to treat different stages and types of strokes.
Learn about the results of pivotal clinical trials forming the basis for the latest guidelines to treat different types of strokes.
Understand the rationale for various preventive strategies commonly used for different types of strokes.
Key Clinical Points
It is important to identify the pathophysiology in each stroke patient, as it drives selection of the best treatment choice.
Magnetic resonance imaging (MRI) brain scans with diffusion-weighted imaging (DWI) are the best way of identifying cerebral infarcts acutely and accurately. While computed tomography (CT) scan is less sensitive for detecting recent infarcts of less than 12 hours, it is the initial method of choice in the hyperacute setting, since most acute treatment decisions can be made based on its results.
Imaging of the cervical and cerebral large arterial system with CT angiography (CTA) or MR angiography (MRA) should be urgently performed to assess the arterial system. CTA offers the best resolution and suffices for acute decision making.
Recombinant tissue plasminogen activator (rt-PA) initiated within 4.5 hours of symptom onset has been shown to reduce stroke-related disability. In general, benefits of rt-PA in older adults with stroke outweigh risks.
Mechanical thrombectomy for selected patients with large vessel occlusion is highly effective in reducing morbidity and mortality and is supported by Level Ia evidence.
Stroke carries a worse prognosis in older individuals overall; however, both intravenous rt-PA and mechanical thrombectomy have been found effective in this population and should be strongly considered when clinically appropriate.
Carotid endarterectomy and carotid stenting are highly effective for symptomatic disease with more than 70% stenosis and should be performed when clinically indicated. Clinical, demographic, and technical aspects should be considered when choosing the best method.
Oral anticoagulation is highly effective in preventing stroke in the setting of atrial fibrillation (AF) especially in the older population. Direct oral anticoagulants (DOACs) are used increasingly and present a better safety profile compared to warfarin.
Antiplatelet monotherapy continues to be the cornerstone of secondary prevention of noncardioembolic strokes, with dual antiplatelet therapy reserved for a short course in high-risk transient ischemic attack (TIA)/minor stroke. Dual antiplatelet therapy can also be considered in strokes secondary to the moderate to severe stenosis seen in intracranial atherosclerotic disease (ICAD).
Overall, prognosis is worse following a stroke in older adults, with higher risk-adjusted mortality, greater disability, longer hospitalization, and reduced chances of being discharged home after hospital admission. However, despite an overall worse prognosis in older adults compared to younger age groups, more aggressive therapeutic and multitargeted secondary prevention strategies have resulted in more favorable stroke outcomes in the older population, including a decline in the crude stroke death rate. Importantly, these promising outcomes are seen across all older age groups, including those 85 years and older.
ISCHEMIC STROKE OR TIA SUBTYPE
Pathophysiology and Clinical Presentation
It is important to recognize that ischemic stroke and TIA share the same pathologic causation, such that efforts to define the underlying arterial pathophysiology of the ischemic stroke or TIA should be the focus of the treatment strategy. The term “cerebral vascular accident” or “CVA” should be discarded; the terms “TIA” and “ischemic stroke” should be used instead, with further characterization of the stroke subtype according to its likely etiology, a distinction that goes beyond nomenclature and carries implications when choosing the most appropriate therapeutic and prevention strategies.
Despite more recent challenges to the classic etiologic categorization, in clinical practice, ischemic stroke/TIA can still be conveniently divided into five subtypes: (1) large artery atherosclerosis, including intra- and extracranial (25%); (2) small-vessel lacunar (25%); (3) cardioembolic
(20%); (4) cryptogenic (25%); and (5) other (5%), such as arterial dissection, venous sinus occlusion, and arteritis. The prevalence of the various ischemic stroke or TIA subtypes varies across different ethnic population groups. Specifically, African-Americans are at a relatively higher risk of having a lacunar stroke and atherothrombotic stroke, particularly that portion of atherothrombotic stroke caused by intracranial arterial atherosclerosis.
Likewise, Asians have a higher frequency of intracranial arterial atherosclerotic disease. Conversely, extracranial atherosclerotic disease shows a predilection for Caucasians.
When transient or sustained focal neurologic symptoms or signs develop in a patient, history, general physical examination, and neurologic examination are important to diagnose stroke, localize the affected territory of the brain or spinal cord and the corresponding vascular distribution, and even suggest the pathophysiologic subtype. Strokes and TIAs require not only immediate imaging of the brain parenchyma but also noninvasive assessment of the extra- and intracranial arterial vasculature focusing on the arteries supplying the suspected symptomatic arterial territory. CT scan of the head is the neuroimaging modality of choice for the hyperacute evaluation of a suspected stroke, mostly to rule out other pathologies that could mimic an ischemic stroke, particularly hemorrhagic stroke. CT has nearly perfect sensitivity for acute intracerebral hemorrhage (ICH) (approaching 100%) and good sensitivity for acute subarachnoid hemorrhage (SAH) (~ 90%).
Sensitivity for ischemic infarction in the acute setting is, however, much lower. Ischemic infarction may not be demonstrable by noncontrast CT for 12 to 14 hours after symptom onset. In addition, infarction involving only the cortical surface supratentorially, or infarction in the posterior fossa, can often be obscured by bone artifact. Therefore, the main reasons for obtaining a head CT scan in the acute phase are to exclude intracranial hemorrhage, detect early signs of ischemia, and define the extent of early ischemic changes. In the acute phase, vascular imaging, usually with CTA of the head and neck, is mandatory to look for vessel pathology that could be responsible for the ischemic changes suggested by the neurologic examination. Only with precise knowledge of parent vessel pathology, or its absence, can therapy be properly considered. Also, given the well-defined role for mechanical thrombectomy (MT) in select patients with a large vessel occlusion (LVO) demonstrated on CTA, such imaging is an essential part of the acute stroke evaluation. Even though MRI is the gold standard for identification of an ischemic stroke, its role in the hyperacute evaluation is limited given the longer examination time. Brain MRI, however, is indicated for most ischemic stroke patients early after hospital admission, and other vascular imaging modalities that can be considered include a combination of MRA, carotid duplex ultrasound, and transcranial Doppler (TCD) (see Figure 62-2). A combination of these imaging modalities is often necessary before a definite etiology can be determined and an appropriate preventive strategy devised. The following sections outline each of the four ischemic stroke and TIA subtypes, and intracerebral hemorrhage and SAH, in terms of their pathophysiologic process and clinical presentation. A discussion of a focused diagnostic approach to confirm the clinically presumed diagnosis follows. Based on the particular TIA or stroke subtype and its causative pathologic process, acute, subacute, and preventive management strategies can then be addressed.
FIGURE 62-2. A. Diagram of a cerebral hemisphere, lateral aspect, showing the branches and distribution of the middle cerebral artery and the principal regions of cerebral localization. Note the bifurcation of the middle cerebral artery into a superior and inferior division. B. Diagram of a cerebral hemisphere, medial aspect, showing the branches and distribution of the anterior
cerebral artery and the principal regions of cerebral localization. (Reproduced with permission from CM Fisher, MD.)
Definition
The classical definition of a TIA is sudden focal neurologic symptoms lasting less than 24 hours. This definition is purely clinical and does not take into consideration neuroimaging findings on MRI after an acute ischemic episode. The more complete definition establishes that if an acute focal neurologic dysfunction shows an imaging correlate (DWI positivity on MRI), it is considered a stroke regardless of whether the initial focal symptoms persist; in that case, it is called a minor stroke. For the majority of patients with persistent focal deficits beyond 24 hours (clinical stroke), neuroimaging will show a corresponding lesion on MRI. The semantics carry little relevance for the diagnostic work-up, since both a TIA and a stroke demand urgent and complete investigation. The distinction's major importance lies in the fact that TIAs and minor strokes carry an overall risk of about 10% for recurrent stroke in the following 90 days, the greatest risk being in the first 48 hours after the event. This risk varies with the type of underlying vessel pathology, neuroimaging findings, and clinical features of the event. Based on clinical features, scales have been created to help stratify the short-term risk of recurrence after a TIA. The most commonly used is the ABCD2 score, in which each clinical feature present receives points, and higher totals translate into a higher risk of recurrence. The clinical features and the points assigned to each are as follows:
Age > 60 years – 1 point
Blood pressure on presentation > 140/90 – 1 point
Clinical features of the TIA—isolated speech disturbance (1 point), unilateral weakness (2 points)
Duration—< 10 min – 0 points; 10–59 min – 1 point; ≥ 60 min – 2 points
History of Diabetes—1 point
The sum of the points yields the final score, and a score ≥ 4 represents an elevated risk for recurrence. This score was used in many acute prevention trials as a criterion for enrolling patients (see below). The early time frame for highest recurrence risk underscores the need for urgent evaluation and implementation of therapeutic and preventive strategies after a TIA or minor stroke.
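The point assignments above are simple enough to compute mechanically. As an illustration only (the function name and argument layout are my own, and the thresholds follow the scale exactly as listed in the text), the score could be sketched as:

```python
def abcd2_score(age, sbp, dbp, unilateral_weakness,
                speech_disturbance, duration_min, diabetes):
    """ABCD2 score for short-term stroke risk after a TIA.

    Thresholds mirror the scale as described in the text:
    age > 60, presenting BP > 140/90, unilateral weakness (2) vs
    isolated speech disturbance (1), symptom duration, and diabetes.
    """
    score = 0
    if age > 60:
        score += 1
    if sbp > 140 or dbp > 90:
        score += 1
    if unilateral_weakness:
        score += 2  # weakness takes precedence for the clinical-feature item
    elif speech_disturbance:
        score += 1
    if duration_min >= 60:
        score += 2
    elif duration_min >= 10:
        score += 1
    if diabetes:
        score += 1
    return score  # a total >= 4 represents elevated recurrence risk
```

For example, a hypothetical 72-year-old with a presenting BP of 150/85, 45 minutes of unilateral weakness, and diabetes scores 6 (1 + 1 + 2 + 1 + 1), well within the elevated-risk group used for enrollment in the acute prevention trials discussed below.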
STROKE SUBTYPES
Large-vessel, atherothrombotic stroke/TIA subtype
Atherothrombotic cerebral vascular disease accounts for approximately 25% of all ischemic strokes. To determine that a stroke is secondary to large-vessel atherosclerotic disease, the artery supplying the ischemic territory (extra- or intracranial) must show stenosis of more than 50% of the normal lumen. It is divided into two categories according to the site of atherothrombotic involvement:
Extracranial atherosclerotic disease (ECAD)
Intracranial atherosclerotic disease (ICAD)
When considered as a group, the atheromatous process commonly has a predilection for four extra- and intracranial arterial locations (see Figure 62-2): (1) the internal carotid artery (ICA) origin, (2) the carotid siphon portion, (3) the middle cerebral artery stem, and (4) the vertebrobasilar junction. Although the origins of the common carotid and vertebral arteries (VA) also are sites of atheromatous disease, they are less often the cause of stroke or TIA. Atherosclerotic narrowing can also involve the proximal intracranial VA and the origins of the posterior cerebral arteries (PCAs), but rarely involves the more distal branches of the cerebral and cerebellar arteries. At each of the sites of predilection, several mechanisms may be responsible for causing the stroke or TIA: (1) Embolism: thrombus forms on an atherothrombotic plaque and a piece of clot travels distally to occlude a more distal vessel; this is called an “artery-to-artery” embolus. (2) Thrombus propagation: this is less common, but if the thrombus propagates to occlude a distal vessel off the circle of Willis, large strokes can occur. (3) Hypoperfusion: the atherothrombotic process may occlude or narrow the vessel to such a degree that distal flow is diminished and not compensated by collateral flow through the circle of Willis. This low-flow state may result in border-zone infarctions, in which an area between the middle cerebral artery–anterior cerebral artery (MCA-ACA) or MCA-PCA territories is affected cortically, or the distal field of the lenticulostriate penetrating arteries is affected in the deep white matter.
In general, both artery-to-artery embolism and low-flow strokes occur when the vessel is narrowed to a degree that decreases pressure across the arterial segment. Low-flow stroke or TIA occurs less often from a cervical lesion because the circle of Willis can usually provide needed distal collateral circulation when a proximal stenosis becomes hemodynamically significant. Low-flow stroke or TIA occurs more often with atheromatous disease in the intracranial vasculature as this compromises the ability of the circle of Willis to provide sufficient collateral flow. In 70% of the population, the circle of Willis is incompetent, with one or more of the connecting arteries atretic or functionally inadequate (see Figure 62-2). In this circumstance, low-flow TIA or stroke may arise from atherothrombosis at the ICA origin or in its petrous or siphon portions.
Atherothrombotic disease of the anterior cerebral circulation (origin of the ICA, its major branches, and the common carotid artery)
In the anterior circulation, atheroma occurs most often at the bifurcation of the common carotid artery (CCA), and usually begins on the posterior wall of the ICA origin.
Internal carotid artery: Most often, atheroma at the origin of the ICA becomes symptomatic after it has narrowed the lumen to the point where pressure begins to drop across the stenosis, allowing both embolic and low-flow ischemic TIA and stroke to occur. Embolism from thrombus forming in an ulcerated plaque may occur at 50% to 70% stenosis, but this is less common, and it rarely occurs with lesser degrees of stenosis. Artery-to-artery embolism can also occur if the atheromatous process occludes the ICA origin, forming a thrombus at the site. At times, the occluding thrombus may also propagate without embolization, reaching the ophthalmic artery origin and producing monocular blindness, or extending even more distally to the MCA origin and producing a devastating, full-territory stroke.
Low-flow symptoms caused by ICA origin stenosis are less common than artery-to-artery embolism, and occur only if two conditions exist: (1) the lesion is hemodynamically significant, that is, severe enough to provoke a drop in pressure across the lesion, and (2) recruitment of the main collateral channels (circle of Willis and anastomotic connections between the external carotid artery and ophthalmic artery) is inadequate, leading to a low-pressure state either in the MCA or in one or both of the anterior cerebral arteries. If there is ICA occlusion and little distal collateral flow, then a complete middle cerebral syndrome may result. The PCA territory may also be vulnerable to low flow in the setting of carotid disease when this artery arises directly from the ICA, a variant called “fetal PCA” that is estimated to occur in up to 30% of the population. If the circle of Willis is complete, then occlusion of the ICA can be asymptomatic if it does not have an associated embolic or propagated thrombotic component.
The signs and symptoms of artery-to-artery embolism from the ICA are variable and depend on which intracranial branches are affected. The MCA, receiving the majority of the blood flow originating from the ICA, is the most often affected, but distal embolization can present with different symptoms, including those from small emboli to the ipsilateral ophthalmic artery. The latter symptom is called amaurosis fugax: a descending shade affecting the ipsilateral eye, usually described by the patient as a brief, self-limited phenomenon. In more extreme cases, complete and persistent visual loss can occur, usually secondary to a central retinal artery occlusion (CRAO). This constitutes a neuro-ophthalmologic emergency and should prompt emergent evaluation for ipsilateral carotid disease of either atherosclerotic or inflammatory (giant cell arteritis [GCA]) origin.
Atheromatous disease in the ICA siphon occurs less often but shares the same type of physiologic mechanisms for TIA and stroke as seen with atheromatous disease of the ICA origin. More proximal involvement, with CCA stenosis or occlusion, is much less common than ICA involvement and can present with similar symptoms.
MCA: MCA territory symptoms can be divided into those involving the stem territory (M1 segment; see Figure 62-2) and those involving the superior or inferior territory divisions (M2 segments) or one of their cortical surface branches (see Figures 62-1 and 62-2). When the stem of the MCA is occluded (M1 occlusion), a complete MCA syndrome may occur, producing a complete contralateral hemiplegia and sensory loss involving the face, arm, hand, leg, and foot. Gaze deviation toward the ischemic hemisphere occurs due to involvement of the frontal eye fields. Ischemia in the dominant hemisphere causes global or partial aphasia, and ischemia of the nondominant hemisphere results in neglect (visuospatial and tactile) and anosognosia. The degree of cortical involvement, usually evident clinically by the presence of gaze deviation, aphasia, or neglect, depends on the level of occlusion and the degree of cortical surface collateral flow (see Figure 62-1). Smaller emboli cause single superior or inferior division MCA syndromes, or partial branch syndromes. Superior division infarcts typically present with isolated contralateral weakness, isolated expressive aphasia, or a combination of the two. Inferior division syndromes include difficulty with reading, writing, or auditory comprehension of language, or fluent aphasic speech with no limb weakness; paraphasic errors are common. Neglect of the left visual hemifield and extinction to double tactile stimulation are signs of cortical involvement seen most often in nondominant hemispheric syndromes, but may also occur on the right with dominant-hemisphere syndromes.
Anterior cerebral artery (ACA): The ACA divides into two segments: the A1 segment, or stem, a part of the circle of Willis, and the A2 segment, distal to the anterior communicating artery (see Figure 62-1). The A1 segment gives rise to several deep penetrating arteries that supply the anterior limb of the internal capsule, the anterior perforated substance, portions of the anterior hypothalamus, and the posterior part of the head of the caudate nucleus. Infarction in these territories is more often caused by an embolus than by local atheromatous disease. ACA infarcts result predominantly in contralateral leg weakness with varying degrees of contralateral shoulder weakness. If the right and left A2 segments arise from a single A1 segment because of a contralateral hypoplastic A1 segment, a normal anatomic variant, occlusion of this single A1 segment causes bilateral frontal lobe infarction, resulting in bilateral leg weakness. Other symptoms of ACA infarcts include urinary incontinence, abulia, gait apraxia, and forced grasping of the hand.
Anterior choroidal artery (AChA): This artery arises from the ICA and supplies the posterior limb of the internal capsule, its adjacent white matter, and the medial temporal lobe, also supplying some geniculocalcarine fibers. The complete clinical syndrome consists of contralateral hemiparesis, hemianesthesia, and hemianopia. However, because this territory is also supplied by penetrating vessels of the MCA stem and the posterior communicating and posterior choroidal arteries, syndromes with minimal deficits can occur.
Atherothrombotic disease of the posterior cerebral circulation: vertebrobasilar and posterior cerebral arteries and their branches
As seen in the anterior circulation, atherosclerosis has a predilection for certain parts of the posterior circulation—namely, the proximal origins of the VA, which fall under the category of ECAD, and the distal (intracranial) VA, the proximal to mid-basilar artery, and the proximal PCA, which fall under the category of ICAD (see Figure 62-1).
Vertebral and posterior inferior cerebellar artery: An occlusion of the distal vertebral or its major branch, the posterior inferior cerebellar artery (PICA), may be caused by either atherothrombosis or by embolism from a proximal arterial source or the heart. VA dissection is another possibility. Atherothrombotic VA stroke is often heralded by TIA or minor stroke. Occlusion of either the VA or the PICA produces infarction in the lateral medulla, resulting in the lateral medullary (Wallenberg) syndrome. The symptoms and signs vary, but more commonly include vertigo, nausea and vomiting, hoarseness, dysphagia, ipsilateral facial numbness associated with impaired sensation of pain and temperature over the ipsilateral face and contralateral arm and leg, ipsilateral Horner syndrome, and ipsilateral limb ataxia. The PICA also supplies the posteroinferior cerebellum that may become infarcted if collateral circulation from the superior cerebellar artery (SCA) is inadequate. The infarct resulting from vertebral occlusion does not differ anatomically from that produced by PICA occlusion, except for a greater involvement of the restiform body (inferior cerebellar peduncle) in the latter. With moderate to large areas of cerebellar infarction, edema might occur and be fatal if not detected early and suboccipital craniectomy performed to relieve the mass effect on the brain stem.
Basilar artery: TIA usually precedes atherothrombotic basilar artery occlusion and the accompanying devastating brain stem infarction. The symptoms of a TIA in the territory of the distal vertebral and the basilar artery are more varied than in the carotid–middle cerebral territory because of the many different anatomic structures involved. Moreover, brain stem TIAs may be caused by disease of the small penetrating branches of the basilar artery or VA, or by disease of the basilar artery or VA themselves. Penetrating branch disease may be due to atherothrombosis involving the proximal origins of these small branch vessels, or lipohyalinosis involving the small vessels deeper in the brain stem (see “Lacunar stroke/TIA subtype” later in this chapter). Therefore, when brain stem TIA or acute stroke occurs, it is extremely important to determine whether the problem lies in the basilar artery or in one of its smaller branches. Disease of a basilar branch produces unilateral infarction, whereas disease of the basilar artery itself usually causes bilateral infarction. Transient dizziness associated with diplopia, dysarthria, and numbness around the mouth strongly indicates the presence of basilar insufficiency. Other important symptoms occurring less often include a profound generalized feeling of weakness of the entire body, staggering, and/or a feeling of propulsion. Bilateral signs such as gaze paresis or internuclear ophthalmoplegia (INO) associated with ipsilateral sensory loss or weakness signify ischemic infarction in both sides of the pons, and therefore exclude single penetrating branch disease as the culprit.
Syndromes of unilateral brain stem infarction typically involve some combination of ipsilateral signs of the head and face, from involvement of cranial nerve nuclei or their fascicles, and contralateral motor and sensory signs in the limbs, from involvement of ipsilateral crossed long tracts, such as the corticospinal tract or spinothalamic tract respectively.
Major basilar branches—anterior inferior cerebellar artery (AICA), SCA, PCA: These major branches of the basilar artery produce their own distinct pathophysiologic syndromes. They are most often caused by artery-to-artery embolism from an atherothrombotic source within the proximal basilar artery or the VA. An aortic or cardiogenic embolic source can be found, especially when the SCA is involved. Rarely, primary atherothrombotic stenosis or occlusion at their origins is the cause of the stroke or TIA.
SCA: Occlusion of the SCA results in one or more of the following symptoms: ipsilateral cerebellar ataxia (caused by ischemia of the middle and/or superior cerebellar peduncle, or dentate nucleus); nausea and vomiting; dysarthria and contralateral loss of pain and temperature sensation over the extremities, body, and face (caused by ischemia of the spinothalamic and trigeminothalamic tracts); and ataxic tremor or choreiform movements of the ipsilateral upper extremity. Ipsilateral Horner syndrome can be present. Partial syndromes occur frequently. Due to involvement of the cerebellar vermis, pronounced truncal ataxia and gait impairment can be seen.
SCA territory infarction should prompt a thorough investigation for a potential embolic source.
Anterior inferior cerebellar artery (AICA): The territory it supplies usually includes the lateral midpons, middle cerebellar peduncle, cerebellum, and the labyrinth and cochlea. The principal symptoms may include ipsilateral deafness, facial weakness, vertigo, nausea, vomiting, nystagmus, tinnitus, cerebellar ataxia, Horner syndrome, and paresis of conjugate lateral gaze. Contralateral loss of pain and temperature sensation is also seen.
Paramedian and short circumferential branches of the basilar artery: Occlusion of one of the short circumferential branches of the basilar artery affects the lateral two-thirds of the pons and/or the middle or superior cerebellar peduncle, whereas occlusion of one of the paramedian branches affects a wedge-shaped area on either side of the medial pons. Many brain stem syndromes with cranial nerve abnormalities and crossed hemiplegia have been described.
PCA: Arising from the bifurcation at the top of the basilar artery, each PCA divides into two segments: (1) the P1 (proximal) segment, beginning at the top of the basilar artery and extending to the posterior communicating artery takeoff, with penetrating branches to the subthalamus, thalamus, and midbrain; and (2) the P2 segment, beginning at the posterior communicating artery takeoff and supplying the medial inferior temporal lobe and the medial occipital lobe. Twenty percent of the time, one or both of the right or left P1 segments are atretic and the P2 segment is supplied by the ICA via the posterior communicating artery. As discussed previously, this is referred to as a “fetal” PCA origin. The majority of ischemic syndromes result from embolism (artery-to-artery or cardiac) and less commonly from atherothrombotic disease of the PCA.
P1 segment: Syndromes are related to midbrain, subthalamic, and thalamic signs that vary depending on whether the embolus occludes the top of the basilar artery, the right or left PCA proximal segment, or the penetrating artery branches that emerge from the proximal PCA. Top-of-the-basilar occlusion results in the devastating syndrome of coma and quadriplegia, resulting from infarction of the reticular activating system and bilateral corticospinal tracts within the midbrain. Branch occlusions cause third nerve palsy and contralateral motor or sensory findings through involvement of the midbrain. The artery of Percheron is a normal anatomic variant in which a single large medial mesencephalic artery supplies both sides of the subthalamus and thalamus and part of the midbrain. Occlusion of this artery results in bilateral ptosis, paralysis of upgaze, and decreased consciousness, caused by involvement of both thalami and the bilateral midbrain. When only a single penetrating artery territory is involved, small-vessel lacunar disease results (see “Lacunar or Small-Vessel Disease” later in this chapter).
P2 segment: Syndromes result from involvement of cortical branches to the medial inferior temporal lobe, giving rise to memory loss and delirium, and branches to the medial occipital lobe, giving rise to contralateral homonymous visual field defects. Distal field border zone ischemia of the PCAs and MCAs gives rise to visual impairment syndromes that include inability to recognize faces or pictures or to put items in a picture together to form an object (Balint syndrome).
Small-vessel, lacunar stroke/TIA subtype
A lacunar infarct results from occlusion of a single small penetrating artery arising from the circle of Willis, the middle cerebral artery stem, the basilar artery, or the PCA, and is defined as a small noncortical lesion measuring up to 1.5 to 2 cm in diameter. The cause is lipohyalinotic narrowing or occlusion in the mid- or distal part of the artery, or an atherothrombotic lesion at its origin; embolism is less often the cause. Lacunar strokes account for 25% of all ischemic strokes. These strokes cause recognizable clinical syndromes that evolve over hours to days, and may be preceded by transient symptoms (lacunar TIAs). The location of the ischemia determines the nature and severity of the symptoms.
Recovery often occurs within days, but in some patients with strategically placed infarcts, significant disability persists, and increasing age is associated with a worse prognosis. Lacunar strokes are often asymptomatic, but when multiple and recurrent they are associated with more widespread white matter disease and an increased risk for cognitive decline and dementia.
The most common lacunar syndromes are the following:
Pure motor hemiparesis is the most common lacunar stroke syndrome. It is usually from an infarct in the posterior limb of the internal capsule, corona radiata, or basis pontis. Less commonly, cerebral peduncle in the midbrain can be involved. The face, arm, leg, foot, and toes are equally paretic or plegic, but with no sensory deficit. The weakness may be intermittent (TIA), progress in a stepwise manner, or appear abruptly.
Pure sensory stroke from an infarct in the ventrolateral thalamus. This type of infarct produces face, arm, and leg sensory involvement with numbness, tingling, and loss of pain and temperature. The patient generally recovers but often is left with an abnormal sensation. On rare occasions, an intolerable pain syndrome with dysesthesia occurs in the involved extremities some months afterward (Dejerine-Roussy syndrome).
Sensorimotor stroke usually results from infarction between the thalamus and internal capsule and presents with contralateral weakness of face, arm, and leg as well as decreased sensation on the same side.
Ataxic hemiparesis from an infarct usually in the basis pontis or the internal capsule. This results in contralateral weakness and ataxia.
Dysarthria—clumsy hand syndrome caused by lacunar infarction of the genu of the internal capsule or, less frequently, the corona radiata or the paramedian rostral pons, with resulting mild contralateral arm ataxia or arm weakness, and dysarthric speech.
Cardioembolic stroke/TIA subtype
Cardioembolic strokes account for approximately 20% of all stroke subtypes, and can reach twice this proportion in the older population. Despite the worldwide downtrend in stroke incidence, cardioembolic stroke incidence has tripled in the last few decades and is estimated to triple again in the next 30 years, possibly due to more extensive diagnostic evaluation. Many cardiac conditions predispose to stroke occurrence (Table 62-1).
TABLE 62-1 ■ EMBOLIC STROKE CLASSIFICATION
On clinical grounds, it is usually diagnosed when a sudden deficit, reaching a peak soon after its onset, appears in the territory of a large intracranial artery or one of its major branches, and the extra- or intracranial arterial supply to this ischemic territory does not have a significant stenotic or thrombotic occlusive lesion. In some cases, the embolic fragment may be seen on imaging to occlude the vessel or a distal branch. In other cases, the suspected embolic material is not visualized because it has already been dissolved by the endogenous fibrinolytic system, but not before significant ischemia has occurred.
The clinical presentations of embolism in the anterior and posterior cerebral circulation are similar to those of artery-to-artery embolism. The nature and severity of the symptoms depend entirely on the location of the embolic fragment occluding the artery and the spared collateral circulation to its cerebral territory. Which intracranial artery or arteries become occluded depends on the size of the embolic fragment and the extracranial artery it enters.
The radiological characteristics of an embolic infarction differ according to the size of the embolic material as well as the diameter of the artery affected. When a more proximal large artery is involved (such as the distal ICA or proximal MCA) a large area of ischemia, involving both deep and cortical structures, occurs. When the embolic material is smaller, it lodges in more distal branches and gives rise to the more typical embolic pattern of a cortically based, wedge-shaped ischemic lesion. The clinical suspicion together with the radiological appearance of a distal cortical-subcortical lesion should prompt a more aggressive investigation for a cardioembolic source such as a structural heart or aortic lesion or an atrial arrhythmia, even if clinically occult.
Many different heart conditions predispose to cardioembolic strokes (see Table 62-1), but AF is not only the most prevalent of these conditions but also the one with the best evidence-based treatment strategies. Estimated to affect more than 30 million people worldwide, it is associated with a three- to fivefold increased risk of stroke. Particularly relevant is its occurrence in the older population where, for individuals older than 65 years, its prevalence increases by 5% per year. Not only is AF more prevalent with aging, but the risk of stroke associated with it also increases in older adults, a notion emphasized by the fact that the proportion of ischemic strokes attributable to AF increases with age and can be as high as 40% in the oldest old age
groups. The statistics of AF prevalence and stroke risk in the geriatric population should stress the importance of exhaustive search for AF in this age group, sometimes utilizing long-term rhythm monitoring strategies such as prolonged Holter monitoring or implantable rhythm monitoring device. In practical terms, the absence of an obvious etiology for an ischemic stroke, such as a lacunar or large vessel pattern, demands prolonged rhythm monitoring in the older population. With more studies demonstrating the effectiveness of prolonged rhythm monitoring strategies in detecting AF in stroke patients, it is also reasonable to consider such monitoring strategies in the older population, even when an alternative etiology is more immediately apparent.
Other conditions, even though less prevalent than AF, can represent high-risk conditions for embolic strokes and include congestive heart failure, recent myocardial infarction (MI) (both associated with left ventricular [LV] thrombus formation), complex aortic arch atheromas, valvular heart disease, and infective endocarditis.
Patent Foramen Ovale and Embolic Stroke Patent foramen ovale (PFO) is the consequence of failure of complete closure of the atrial septum primum and septum secundum immediately following birth, thereby leaving a communication between the right atrium and left atrium. PFO has been associated with stroke in epidemiologic studies, which indicate it is present in as many as 50% of patients with cryptogenic embolic stroke. Stroke may be more common in those with PFO because the atrial communication provides a channel by which a venous embolism can pass from the right to left side of the heart with the potential for subsequent cerebral embolism (so-called "paradoxical embolism"). Alternately, the PFO may itself be the source of thrombus formation with subsequent embolism. The PFO may be identified by transthoracic echocardiography with injection of agitated saline. Transesophageal echocardiography offers increased sensitivity. TCD ultrasound can also be used to detect the presence of right-to-left shunting by documenting the passage of agitated microbubbles into the cerebral circulation following a peripheral venous injection. It has equal or greater sensitivity than transthoracic echocardiography and can be used to guide the cardiac evaluation.
There is considerable controversy regarding the significance and therapeutic implications of PFO in stroke. PFOs are common and present in approximately 15% to 35% of the general population. After a thorough
diagnostic work-up for stroke etiology, it is often difficult to determine if a PFO is related to the stroke or an “innocent bystander.” If a PFO is identified, the patient should be screened for the presence of deep venous thrombosis (DVT), which would itself be an indication for a period of anticoagulation. PFO is likely more relevant in younger patients with few or no risk factors for stroke than in the older population, where other traditional cardiovascular risk factors make other stroke etiologies more likely.
Cryptogenic stroke In up to 30% of stroke patients, despite an extensive work-up, a causative etiology cannot be found, and the stroke is classified as cryptogenic. A minimally complete etiological work-up should include neuroimaging with CT or MRI, cervical and intracranial vessel imaging, and 24-hour heart rhythm monitoring. The absence of a clear etiology is more frequently seen in younger patients but can be seen in older individuals, even when multiple cardiovascular risk factors are present. Since large-vessel stenotic atherosclerotic lesions and lacunar patterns have been excluded, the majority of cryptogenic strokes fall under the category of embolic stroke of undetermined source (ESUS). This most recent stroke subclassification acknowledges that cryptogenic strokes often appear indistinguishable from strokes with a known embolic source: their radiological and clinical characteristics are those of an embolic-looking stroke (above) but with no obvious cardiac source of embolism. This entity is important not only because it is relatively frequent but also because an even more extensive work-up in this group of patients often reveals an underlying cardioembolic source, namely atrial fibrillation. Studies have shown that in this group, after more prolonged heart rhythm monitoring (at least 6 months) with implantable monitoring devices, the prevalence of paroxysmal AF can be as high as 30%. Such results have obvious clinical significance given the superior protection that long-term oral anticoagulation offers for secondary prevention, when compared to antiplatelet therapy.
For the ESUS patients with no AF detected after months of continuous rhythm monitoring, the possibility that long-term anticoagulation might be more effective than antiplatelet therapy is being investigated by current studies where structural and electrical abnormalities of the left atrium (atrial cardiomyopathy) are considered as potential markers of increased embolic risk. Like AF, such a condition is more prevalent in older adults and likely predisposes to strokes. Results from such studies will help determine the best secondary prevention strategies for this group.
Other causes of cerebral infarction Although other causes of cerebral infarction account for only 5% of all ischemic strokes, they are extremely important because their precise pathophysiologic diagnosis can lead to effective treatment.
Dissection of the cervical cerebral arteries is the most common cause of stroke in this category. A dissection is a tear in the arterial wall leading to intramural hematoma formation. A subintimal dissection occurs when an intramural hematoma forms between the intima and media layers and may lead to arterial narrowing and thrombus formation. A subadventitial dissection occurs when an intramural hematoma forms between the media and adventitia layers and may lead to the formation of a dissecting aneurysm (sometimes referred to as pseudoaneurysm).
ICA dissections usually occur 2 cm distal to the carotid bifurcation near the base of the skull. VA dissections occur in the cervical transverse foramen but more commonly at the base of the skull, at the V3-V4 segments.
Intracranial dissections are less common than cervical dissections and can lead to ischemic stroke and bleeding in the subarachnoid space. Trauma, whether severe or trivial, is a common cause of dissection, but Valsalva maneuvers associated with coughing and vomiting, weightlifting, contact sports, and chiropractic manipulation are other recognized associations.
Spontaneous dissections, without a clear antecedent cause, are not uncommon. Dissections occur at all ages but tend to be less frequent in older individuals.
The most common symptom of arterial dissection is headache or neck pain. The clinical hallmark of carotid dissection is an ipsilateral Horner syndrome, usually with unilateral cervical pain. Artery-to-artery embolism or low-flow syndromes occur, just as they do in atherothrombotic disease of the ICA. Similar pathophysiologic circumstances exist for VA dissections; with them, cervical spine pain and occipital headache are the suggestive symptoms. The most common site of infarction in vertebral dissection is the lateral medulla, with or without concomitant involvement of the PICA territory in the cerebellum. Dizziness, ataxia of gait, hiccups, nausea and vomiting with a unilateral Horner syndrome, and ipsilateral face numbness with contralateral body numbness are the hallmark symptoms (Wallenberg syndrome). Occasionally, diplopia and a hoarse voice are evident. Artery-to-artery emboli arising from a thrombus at the site of dissection in the VA may migrate to distal branches of the basilar artery, producing brain stem,
cerebellar, or thalamic infarction. Sometimes spontaneous dissection can be seen in the context of underlying fibromuscular dysplasia (FMD), a noninflammatory, nonatherosclerotic vascular disease that can result in arterial stenosis, occlusion, aneurysm, or dissection. FMD most frequently involves the renal artery but can also involve the extracranial carotid and VA. It is usually seen in younger patients.
Vasculitis is inflammation of the blood vessels. There are numerous causes such as infection, malignancy, immune diseases, and drugs. Infectious vasculitis from bacterial or syphilitic infections is no longer a common cause of cerebral thrombosis. Infectious arteritis may rarely follow infection with varicella zoster virus (VZV), particularly if the ophthalmic division is involved, and is usually associated with involvement of larger caliber vessels. Cerebral arteritis may rarely accompany certain systemic vasculitides, including polyarteritis nodosa or Wegener granulomatosis.
Necrotizing granulomatous arteritis, or primary angiitis of the central nervous system, involves the distal small branches (< 2 mm diameter) of the main intracranial arteries and produces small ischemic infarcts in the brain, optic nerve, and spinal cord. By definition, there is no systemic involvement. This rare disease is often relentlessly progressive. Presenting symptoms are varied and nonspecific, but headache is the most frequent complaint. MRI is frequently abnormal, but the findings are nonspecific and include cortical and subcortical infarcts, parenchymal and leptomeningeal enhancement, subarachnoid and intraparenchymal hemorrhage, or mass lesions. Angiography can demonstrate areas of stenosis alternating with normal or dilated segments, but neither its sensitivity nor its specificity is ideal. Diagnosis of primary angiitis of the central nervous system can be difficult and typically requires that either angiographic criteria be met or a tissue diagnosis be obtained.
Giant cell arteritis is a relatively common affliction of older persons, in which the external carotid system—particularly the temporal arteries—is the site of a subacute granulomatous infiltration with an exudate of lymphocytes, monocytes, neutrophilic leukocytes, and giant cells. The etiology of giant cell arteritis is not entirely clear, but it is likely secondary to multiple genetic and environmental factors with numerous infectious etiologies having been suspected in the past, including VZV. The pathogenesis is believed to involve an initial antigen-driven event that leads to the recruitment of T cells with
subsequent inflammation potentially causing vascular damage and intimal hyperplasia, the result of which can cause stenosis or occlusion.
Clinically, the chief complaint is headache or jaw claudication. Systemic manifestations, through the release of inflammatory cytokines, can include fever, loss of weight, malaise, and polymyalgia rheumatica. Blindness of one or both eyes results from occlusion of the branches of the ophthalmic artery. Occasionally, an ophthalmoplegia caused by involvement of extrinsic ocular muscles occurs. In some cases, an arteritis of the aorta and its major branches, including the carotids, subclavian, coronary, and femoral arteries, has been found at postmortem examination. Significant inflammatory involvement of the intracranial arteries is rare, but strokes occur occasionally due to occlusion of the internal carotid, middle cerebral, or vertebral-basilar system with a predilection for affecting the latter.
Per the American College of Rheumatology, the diagnosis is made if three of the five following criteria are met: (1) age of onset is greater than 50, (2) new headache, (3) temporal artery abnormality (tenderness to palpation or decreased pulsation unrelated to arteriosclerosis of the cervical arteries), (4) elevated erythrocyte sedimentation rate (ESR) (≥ 50 mm/h), and (5) abnormal findings on temporal artery biopsy. The hallmark neurologic symptom is transient monocular blindness, but ischemic stroke can rarely be the presenting feature. In older patients with new-onset headaches, especially if associated with visual complaints and/or ischemic stroke, high clinical suspicion warrants obtaining urgent ESR and C-reactive protein (CRP) levels.
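The "three of five" ACR rule just described is a simple counting criterion. The sketch below encodes it for illustration; the criterion labels are my own shorthand, not official identifiers, and this is not a diagnostic tool.

```python
# Illustrative encoding of the ACR "3 of 5" classification rule for
# giant cell arteritis described in the text. Labels are informal
# shorthand for the five published criteria.

GCA_CRITERIA = {
    "age_at_onset_over_50",
    "new_headache",
    "temporal_artery_abnormality",
    "esr_at_least_50_mm_per_h",
    "abnormal_temporal_artery_biopsy",
}

def meets_acr_gca(criteria_met):
    """Return True when at least three recognized criteria are present."""
    present = set(criteria_met) & GCA_CRITERIA
    return len(present) >= 3

# Example: age > 50, new headache, and ESR >= 50 mm/h satisfy the rule.
print(meets_acr_gca(["age_at_onset_over_50", "new_headache",
                     "esr_at_least_50_mm_per_h"]))  # True
```

Unrecognized labels are simply ignored by the set intersection, so only the five published criteria can count toward the threshold.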
Moyamoya disease is a poorly understood nonatherosclerotic occlusive disorder involving progressive stenosis of the large intracranial arteries, especially the distal ICA, the stem of the MCA, and the ACA. Although it is predominantly a disease of children and young adults, it can rarely be seen in adults older than 60 years.
Reversible cerebral vasoconstriction syndrome (RCVS) is a reversible angiopathy that presents with severe “thunderclap” headache, often recurrent, and fluctuating neurologic symptoms and signs as well as angiographic findings of alternating areas of constriction and dilation. It affects predominantly women in the fourth and fifth decades but has been reported in patients in their sixties and seventies. Brain imaging can be normal or show cerebral infarction, hemorrhage, or transient cerebral edema. The etiology is unknown. Eclampsia, the postpartum period, head injury, migraine, and
sympathomimetic drugs and SSRIs have all been associated with this entity. Conventional angiography is the gold standard for establishing the diagnosis, although newer imaging techniques such as CTA and MRA are proving useful. Cerebrospinal fluid (CSF) is normal in most cases, and this is one of the characteristics that helps distinguish this entity from primary angiitis of the central nervous system. The disease is self-limited, and, with adequate supportive care and withdrawal of the offending agent, partial or complete recovery occurs in most cases. The headaches and arterial vasoconstriction usually resolve within a few weeks.
Cerebral autosomal dominant arteriopathy with subcortical infarcts and leukoencephalopathy (CADASIL) is a rare primary arteriolopathy affecting small penetrating vessels to the basal ganglia, thalamus, and cerebral white matter. The disease is caused by a variety of mutations in the Notch3 gene. The clinical presentation is varied, but there are five primary symptoms: late-onset migraine headaches with aura, subcortical ischemic strokes, mood disturbances, cognitive impairment, and apathy. It is part of the differential diagnosis of progressive cognitive impairment with diffuse white matter disease but is rarely seen in older adults.
Binswanger subcortical leukoencephalopathy, a rare cause of dementia, is a syndrome seen with advanced small-vessel hypertensive disease.
Diffuse vascular lesions are seen in the subcortical layers of the cerebral hemispheres, and there is widespread white matter demyelination. It usually affects individuals around age 50 and is characterized by fluctuations in mood and consciousness and sometimes even seizures. Dementia may be an early and prominent symptom, preceded or accompanied by symptoms and signs of one or more small-vessel infarctions. Confusional states, memory difficulties, and abulia are prominent and are sometimes accompanied by focal cortical-subcortical deficits such as aphasia, apraxia, or neglect. Focal neurologic deficits and uni- or bilateral limb signs may progress to a pseudobulbar state, and gait difficulties are prominent. There is often evidence of vascular compromise elsewhere in the body. Binswanger disease must be differentiated from disorders with prominent subcortical white matter involvement on CT or MRI such as hypertensive encephalopathy, cerebral amyloid angiopathy (CAA), and CADASIL.
Hematologic diseases such as acute and chronic leukemia, essential and secondary thrombocytosis, thrombocytopenia, and sickle cell disease can be complicated by ischemic or hemorrhagic stroke. Acquired hypercoagulable
states, mainly detected through antiphospholipid antibody testing, can be part of the investigation of cryptogenic stroke in older patients, while other hereditary conditions are more relevant in the investigation of stroke in the young. Cancer also increases the risk of hypercoagulability and hence ischemic stroke, and should be considered in older patients, especially when traditional stroke risk factors are absent.
Stroke can also occur during the course of a severe attack of migraine, especially migraine with aura (“migraine-induced stroke”). It is less often seen in the older population and is usually a diagnosis of exclusion, since other more common etiologies in that age group have to be considered and ruled out first.
EVALUATION
History, Physical Examination, and Initial Imaging Evaluation
The initial history and physical examination are the cornerstone of the evaluation for determining the pathophysiologic stroke or TIA subtype. Urgent brain and vascular imaging must be performed in all patients.
Noncontrast head CT, for most centers, is the initial imaging evaluation as it allows rapid exclusion of hemorrhage. While not sensitive enough to detect acute ischemic changes, especially small areas of infarction, in the first 12 hours after stroke, a head CT scan can answer the main questions in the acute setting such as:
Is there hemorrhage?
Is there an alternative diagnosis evident on initial imaging (tumor, abscess)?
Is the stroke already evident on initial imaging, and if so, how extensive is it?
Regarding the last question, part of the evaluation of the stroke characteristics on initial CT includes estimating the extent of ischemic involvement, mostly by identifying areas of hypoattenuation and loss of gray-white matter differentiation. The ASPECTS scale (Alberta Stroke Program Early CT Score) is used to estimate the extent of early ischemic changes and is strongly related to prognosis. It divides the MCA-distribution territory into 10 areas and subtracts 1 point from a maximum score of 10 for each area affected. A score < 6 is generally associated with a worse prognosis and is often used as
a cutoff for guiding acute interventional therapies. However, this is not a strict rule, and individual case-by-case decisions are still warranted when considering acute revascularization strategies in daily practice. Besides analysis of parenchymal involvement on noncontrast CT, acute vessel imaging with CT angiography of the head and neck should be part of the acute stroke evaluation. Knowing the anatomy of the intra- and extracranial vessels, as well as the presence and location of an LVO, is critical information in the acute decision-making process. CT perfusion (CTP) is part of the same radiological armamentarium and often provides key information on tissue viability, especially in patients who present in an extended time window from stroke onset and might still be candidates for acute intervention (MT). Details of this modality can be found elsewhere, but it consists of a CT scan in which a contrast bolus is tracked by serial scanning as it passes through the tissue, and the results are deconvolved into a map of perfusion times. In practical terms, the critical information one looks for when obtaining a CTP in acute stroke is whether there is still viable tissue (ischemic penumbra) that can potentially be saved by recanalization strategies (see below). For these reasons, the information provided by CT and CTA (and in some cases CTP) is usually fast and reliable and in most cases will suffice to determine eligibility for acute treatment, either intravenous thrombolysis (IVT) or MT (see below).
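The ASPECTS arithmetic described above (10 regions, 1 point lost per affected region, worse prognosis below 6) can be sketched in a few lines. Region labels follow the published scale, but this is an illustrative sketch, not clinical software.

```python
# Illustrative sketch of ASPECTS arithmetic (not clinical software).
# The MCA territory is divided into 10 regions; the score starts at 10
# and loses 1 point for each region showing early ischemic change.

ASPECTS_REGIONS = {
    "caudate", "lentiform", "internal_capsule", "insula",
    "M1", "M2", "M3", "M4", "M5", "M6",
}

def aspects_score(affected_regions):
    """Return 10 minus the number of distinct affected regions."""
    affected = set(affected_regions)
    unknown = affected - ASPECTS_REGIONS
    if unknown:
        raise ValueError(f"unrecognized region(s): {sorted(unknown)}")
    return 10 - len(affected)

score = aspects_score(["insula", "lentiform", "M2", "M3", "M4"])
print(score)      # 5
print(score < 6)  # True: often taken as the worse-prognosis cutoff
```

Duplicate labels are collapsed by the set, so a region cannot be counted twice.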
Despite its greater accuracy for detecting acute ischemic stroke changes, MRI of the head is less practical in the hyperacute setting and is usually reserved for completing the stroke evaluation at a later time or for when a challenging case demands more information for the acute decision-making process. In patients who are not candidates for acute intervention, a more complete diagnostic work-up should be initiated. MRI of the brain with diffusion-weighted imaging (DWI) is the best way to identify cerebral infarcts acutely and accurately. Susceptibility sequences identify subacute hemorrhagic infarcts and small areas of chronic hemorrhage (microbleeds) that may be associated with small-vessel diseases such as amyloid angiopathy or hypertensive vasculopathy. MRI is far more sensitive than CT not only for detecting infarction at different stages but also for identifying other pathologies that can clinically mimic an acute stroke, such as brain tumor, abscess, and demyelinating/inflammatory lesions. MRI also has high sensitivity and specificity for the detection of hemorrhage.
Determining the location and pattern of ischemia on MRI is the first step in determining the most likely stroke etiology.
Imaging of the cervical and cerebral large arterial system with CTA, MRA, or carotid duplex and TCD ultrasound is used to determine the vessel anatomy and its associated pathology. If done in the acute setting, CTA of the head and neck is usually sufficient, and other modalities can be reserved for inconclusive studies or for when administration of intravenous iodinated contrast is to be avoided. MRA of the head and neck does not offer the same anatomical resolution of the arterial system as conventional transfemoral angiography or CT angiography. MR perfusion-weighted imaging can, in a manner analogous to CT perfusion imaging, identify areas of perfusion delay that indicate tissue at risk of progression to infarction.
Neurosonology tests include carotid duplex ultrasound and TCD assessment of the extra- and intracranial arterial system, respectively. Carotid duplex Doppler assesses flow in the CCA, its bifurcation, and the internal and external carotid arteries. In addition, flow in the middle portion of the VA is typically assessed to identify more proximal or distal obstructive lesions. TCD allows assessment of flow in the intracranial carotid artery and the ophthalmic artery through the transorbital approach. The transtemporal approach permits assessment of flow in the middle, anterior, and PCA stems. The occipital foramen magnum approach allows determination of flow in the distal VA and in the proximal and mid sections of the basilar artery. These tests have the advantage of being simple, noninvasive, and portable.
Neurosonology tests can be used to follow the progression of the arterial pathology subacutely and chronically. Detection of microbubbles after injection of agitated saline is also helpful in detecting evidence of right to left shunt as seen in the presence of PFO. Prolonged monitoring with emboli detection and study of cerebral vasoreactivity are examples of the clinical utility of TCD in the etiological evaluation of ischemic strokes. The quantification of carotid artery atheromatous disease and its progression is an especially important use of carotid duplex combined with TCD.
Laboratory Evaluation
After completion of all acute neuroimaging modalities, the initial blood work should include the standard complete blood count, basic coagulation studies, and general chemistry examination. Checking a fasting lipid panel and
screening for diabetes with hemoglobin A1c should also be performed for stroke risk factor modification. Special clotting studies are not essential but are often useful when a hypercoagulable state is suspected, mostly in younger patients; in the majority of older patients, an extensive hypercoagulable panel is usually not warranted. Screening thyroid function with a thyroid-stimulating hormone level helps identify clinically inapparent hyperthyroidism, a condition that may be associated with AF or hyperlipidemia. ESR should be considered in the older population, especially when other risk factors are missing and there is suspicion of GCA or endocarditis as the cause of the stroke.
Cardiac evaluation In addition to obtaining a baseline electrocardiogram, cardiac echocardiography and cardiac rhythm monitoring should be performed, especially when a cardiac embolism is considered in the differential diagnosis. In practical terms, unless there is an obvious causative etiology identified (lacunar or large artery atherosclerosis), given the high prevalence of AF in the older population, cardiac rhythm monitoring should always be considered in this patient population. Outpatient prolonged cardiac rhythm monitoring, ideally with an implantable loop recorder device, is very helpful if AF or other cardiogenic arrhythmias are considered and is now standard of care to look for AF in cases of suspected cardiac embolism. The suggested duration of monitoring is usually for 6 months at least, or until paroxysmal AF is identified.
Therapeutic Strategies for Acute Ischemic Stroke
With the acute onset of neurologic deficits secondary to cerebral ischemia, the goal is to facilitate or reestablish blood flow to the ischemic zone. The therapeutic strategy should always be guided by the ischemic stroke pathophysiology and mechanism, whether presumed (history and physical examination) or confirmed (history, physical examination, and diagnostic data including neuroimaging, echocardiography, heart rhythm monitoring, and laboratory testing). The diagnosis should include not only the ischemic stroke or TIA subtypes noted earlier, but also the presence or absence of pathology in the parent vessel supplying the ischemic zone and the extent of the spared collateral flow to it. The time of onset to treatment determines the therapeutic options available, which typically include (1) acute reperfusion therapies,
(2) early stroke prevention strategies, and (3) long-term stroke prevention strategies.
After the pathophysiologic diagnosis has been determined and the patient has been evaluated for therapies designed to facilitate or reestablish cerebral perfusion and prevent subsequent strokes, further management efforts are directed at preventing common complications, such as DVT and aspiration pneumonia, and assisting with neurologic recovery through rehabilitation.
Acute reperfusion therapies Within the first few hours of cerebral ischemia, rapid restoration of blood flow to the affected area is the cornerstone of acute stroke treatment, and this goal is achieved either by pharmacological treatment with IVT or by mechanical clot removal via endovascular thrombectomy (EVT or MT; see Figure 62-3).
FIGURE 62-3. Algorithm for acute stroke treatment strategies.
Intravenous Thrombolysis For patients in whom treatment can be initiated within 3 hours of symptom onset, IVT with alteplase, at a dose of 0.9 mg/kg with 10% given as an immediate bolus and the remainder infused over 1 hour, has been shown to reduce stroke-related disability. In the pivotal National Institute of Neurological Disorders and Stroke (NINDS) rt-PA Stroke trial, patients treated with IVT had a 12% greater absolute chance of a good outcome,
defined as minimal or no stroke-related disability. This benefit was present despite a 6% risk of symptomatic intracranial hemorrhage (sICH) in the IVT group, of which more than half were fatal. Post hoc analysis of the trial did not show statistical evidence that the treatment effect varied by presumed stroke subtype, although the power to detect differences was modest.
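The weight-based regimen described above (0.9 mg/kg, 10% as an immediate bolus, the remainder infused over 1 hour) is simple arithmetic, sketched below. The 90 mg total cap is the commonly cited maximum dose; it is an assumption here, since the passage above does not state it.

```python
def alteplase_dose(weight_kg, max_total_mg=90.0):
    """Split an alteplase dose per the regimen in the text:
    0.9 mg/kg total, 10% as an immediate IV bolus, the remainder
    infused over 1 hour. The 90 mg cap is the commonly cited maximum
    (an assumption, not stated in the passage above).
    Returns (total_mg, bolus_mg, infusion_mg)."""
    total = min(0.9 * weight_kg, max_total_mg)
    bolus = 0.10 * total
    return total, bolus, total - bolus

# An 80-kg patient: roughly 72 mg total, 7.2 mg bolus, 64.8 mg over 1 h.
total, bolus, infusion = alteplase_dose(80)
```

Above roughly 100 kg the per-kilogram formula exceeds the cap, so the total is held at the maximum. This sketch is for illustrating the arithmetic only, not for dosing patients.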
Subsequently, a trial of IVT given to patients with acute ischemic stroke within 3 to 4.5 hours showed a significant benefit of IVT as compared to placebo. In this study, the European Cooperative Acute Stroke Study (ECASS) III, the mean time to administration of rt-PA was 3 hours 59 minutes. Patients receiving IVT had a 7.2% greater absolute chance of a good outcome, defined as a modified Rankin score of 0 or 1 at 90 days. Mortality did not differ between the two groups. The rate of any intracranial hemorrhage was, as expected, higher in the IVT group (27% vs 17.6%), as was the rate of sICH (2.4% vs 0.3%), though lower than in the NINDS rt-PA trial. It is important to note that in ECASS III patients were excluded if older than age 80, if they had a severe stroke, or if they had a history of previous stroke and diabetes mellitus. However, updated AHA/ASA guidelines for acute stroke treatment cite a careful assessment of the published evidence to indicate that these exclusion criteria may not always be justified in clinical practice and recommend a detailed evaluation of individual risks versus benefits of IVT among patients presenting in the 3- to 4.5-hour time window. Further, both in NINDS and in ECASS III, the earlier the patients received IVT, the better were their outcomes. Therefore, it is of utmost importance to initiate thrombolytic therapy in patients with acute ischemic stroke as soon as possible, that is, aiming for shorter "door-to-needle" times.
Extended-window IVT: About 20% of strokes occur during sleep, but a witnessed last-known-well (LKW) time of > 4.5 hours in these patients would disqualify them from receiving IVT owing to uncertainty regarding the true onset of stroke symptoms. Several studies have shown that a significant proportion of these patients suffer stroke symptoms in the early morning hours just before waking, because of circadian fluctuations in blood pressure, heart rate, hemostatic processes, and occurrence of AF. Various neuroimaging modalities such as MRI (DWI-FLAIR mismatch), CT perfusion, and MRI perfusion studies have recently been shown to reliably identify patients who may benefit from acute reperfusion therapies. The concept of DWI-FLAIR mismatch, that is, the presence of an acute ischemic lesion on DWI in the absence of a hyperintense lesion on FLAIR, serving as a
surrogate marker of time elapsed since the actual onset of stroke symptoms has been recently evaluated in a randomized trial of MRI-guided intravenous thrombolysis in stroke patients with unknown time of symptom onset (WAKE-UP). The basis of this clinical trial was a proposition supported by previous studies that DWI-FLAIR mismatch on an MRI was reliably able to identify a majority of the patients whose stroke symptoms started within the preceding 4.5 hours. The trial included patients who presented with a new stroke, were LKW more than 4.5 hours earlier, had no thrombectomy planned, fulfilled the prespecified imaging criteria (DWI-FLAIR mismatch), and in whom treatment with IV alteplase could be initiated within 4.5 hours of symptom recognition. The trial was terminated prematurely owing to cessation of funding after the enrolment of 503 out of 800 patients and showed an excellent outcome in 53.3% patients receiving MRI-guided thrombolysis versus 41.8% in the control arm (p = 0.02). Alteplase was associated with a nonsignificantly higher risk of sICH (2% vs 0.4%, p = 0.15) and a nonsignificantly higher mortality at 90 days (4.1% vs 1.2%). Of note, the majority of the patients treated had mild to moderately disabling stroke with a median NIHSS of 6.
In acute ischemic stroke, there is a core of irreversibly damaged tissue surrounded by an ischemic penumbra representing potentially salvageable tissue, provided normal blood circulation within that tissue is rapidly restored (discussed later in this section). Studies have shown that a CT-perfusion or an MRI-perfusion study can identify the core and penumbra reliably, often represented as a core/perfusion mismatch. The Thrombolysis Guided by Perfusion Imaging up to 9 Hours after Onset of Stroke (EXTEND) trial compared alteplase with placebo in patients presenting between 4.5 and 9 hours after stroke onset, or after awakening with stroke (if within 9 hours of the midpoint of sleep), using predetermined CT or MRI core/perfusion mismatch criteria to select patients. After 225 of a planned 310 patients had been enrolled, the study was terminated because of a loss of equipoise following publication of the WAKE-UP trial. EXTEND showed that alteplase was associated with an excellent 90-day outcome in 35.4% of patients as compared to 29.5% of patients receiving placebo (adjusted odds ratio 1.44, 95% CI 1.01–2.06, p = 0.04).
The risk of sICH was higher in the alteplase group, 6.2% versus 0.9% (adjusted risk ratio 7.22, 95% CI 0.97–53.53, p = 0.053). The 90-day mortality was similar between the alteplase (11.7%) and placebo (8.9%)
groups (adjusted risk ratio 1.17; 95% CI 0.57–2.40; p = 0.67). The publication of these trials, among other studies, has contributed to the expansion of IVT eligibility to a subset of acute stroke patients whose LKW time is either unknown or more than 4.5 hours before they awoke with stroke symptoms, depending on the treating center and its available resources. One limitation of the MRI-based approach is that it requires immediate access to an MRI scanner and collateral information from the patient or family members to establish MRI safety (absence of an ICD/pacemaker or metallic prosthesis in the patient’s body). A CT-perfusion-based approach, on the other hand, requires access to automated software that enables rapid calculation of ischemic core and penumbra estimates. Moreover, both trials included a substantial proportion of patients with a LVO who may have qualified for MT based on two endovascular trials that were subsequently published (discussed later in this section). These limitations have restricted the widespread adoption of these approaches, depending on the region, access to care, and available resources, but they remain reasonable options for a select subgroup of acute stroke patients. The 2019 AHA/ASA update on the early management of acute ischemic stroke endorses the MRI-based approach (DWI-FLAIR mismatch) for IVT as a reasonable choice for a carefully selected group of acute ischemic stroke patients in the extended time window.
Because of the risks of hemorrhage, the decision to administer IVT should be based on an individualized assessment of benefits and risks, with careful attention to the treatment inclusion and exclusion criteria. Patients with acute disabling neurological deficits and neither hemorrhage nor established infarct on the initial head CT should be considered for IVT. Exclusion criteria must be reviewed carefully prior to IV rt-PA administration. An important concept is the disabling nature of the neurological symptoms at presentation. A phase IIIb, double-blind, multicenter study of the efficacy and safety of alteplase in patients with mild stroke, rapidly improving symptoms, and minor neurologic deficits (the PRISMS trial) tested IV alteplase against aspirin for emergent stroke with presenting symptoms deemed nondisabling by the treating physician. Of a planned 948 patients, the trial recruited only 313 and failed to show a benefit of IV alteplase over aspirin; consequently, the current AHA/ASA guidelines
recommend against IVT administration among patients who do not have presenting neurological deficits that would interfere with their daily lives. There has been concern about treatment of older adults because age may be a risk factor for IVT-related hemorrhage; however, the NINDS trial data show that this increased risk does not outweigh the potential benefit in older persons.
Although the rate of sICH after IVT is relatively low, it remains the most feared and sometimes fatal complication of this treatment. Rapid identification, administration of reversal agents such as cryoprecipitate, prothrombin complex concentrates (PCC), or the antifibrinolytic tranexamic acid (alone or in combination), and provision of neurosurgical and hematological consultations under the care of experienced neurocritical care teams form the mainstay of management of sICH after IVT administration.
More recently, tenecteplase (TNK) has been gaining traction in the vascular neurology community as an alternative to alteplase as the thrombolytic agent in acute ischemic stroke. TNK is a recombinant tissue plasminogen activator like alteplase but exhibits a higher degree of fibrin specificity and a longer half-life, which allows administration as a single bolus dose. Preliminary studies have shown that it is at least comparable to, and in some situations may be more effective than, alteplase for acute ischemic stroke. Consequently, many centers worldwide are beginning to offer TNK as an alternative to alteplase while further evidence is awaited.
Mechanical thrombectomy EVT with thrombolytics and thrombus-retrieval devices had been studied over the previous two decades in many trials without much success in improving clinical outcomes. In 2015, however, several pivotal randomized clinical trials established an overwhelming benefit of EVT/MT in the treatment of patients with acute ischemic stroke and a LVO who fulfilled certain criteria, establishing this treatment as a standard of care for this patient population. The tremendous success of the newer trials was attributed to better patient selection criteria and the use of more effective reperfusion devices. The exact details differ among these trials, but overall there were many common features that contributed to the success of these studies and of EVT/MT in general. First, all of these trials required the presence of a LVO, defined as an occlusion of the intracranial segment of the ICA or the proximal segment of the middle cerebral artery (MCA), denoted M1. The more distal blood vessels are denoted by
M2, M3, and so on. Very few patients in these trials harbored an M2 occlusion, and none had more distal occlusions. Second, all trials except MR CLEAN (Multicenter Randomized Clinical trial of Endovascular treatment for Acute ischemic stroke in the Netherlands) required evidence of the absence of severe ischemic changes in the affected brain tissue on noncontrast CT, represented by a favorable Alberta Stroke Program Early CT Score (ASPECTS); although MR CLEAN did not include this requirement in its entry criteria, it recruited very few patients with evidence of advanced ischemic changes. Third, all trials encouraged rapid attempts at recanalization, and while most implemented a 0- to 6-hour window since LKW for study entry, ESCAPE (endovascular treatment for small core and anterior circulation proximal occlusion with emphasis on minimizing CT to recanalization times) and REVASCAT (randomized trial of revascularization with Solitaire FR device versus best medical therapy in the treatment of acute stroke due to anterior circulation LVO presenting within 8 hours of symptom onset) proved the benefit of MT in patients up to 8 hours from symptom onset. Fourth, for the majority of patients, MT was carried out using second-generation stent retriever devices. In this procedure, a catheter is advanced into the artery under fluoroscopic guidance and a stent retriever is inserted through the catheter. The stent is advanced past the clot, expanded against the arterial wall to engage the clot, and then withdrawn, removing the clot. Finally, these trials required patients to have a good baseline level of functioning in order to participate, which allowed adequate participation in rehabilitation therapies following thrombectomy and promoted neurological recovery among treated patients.
A meta-analysis combining data from 1287 patients from five of these pivotal trials showed that the rate of successful recanalization among patients undergoing MT was 71%. Patients who underwent MT had a higher likelihood of achieving good 90-day clinical outcome with return to functional independence, 46% versus only 26.5% in the control group. Finally, the number needed to treat for one patient to have reduced disability was found to be 2.6, thus confirming the highly effective nature of MT as a treatment for anterior circulation LVO among acute ischemic stroke patients with disabling symptoms and who were LKW within 8 hours. Moreover, this benefit persisted across all age groups, even octogenarians, and although the overall mortality remained lower in the intervention group (15.3% versus 18.9% in the control group), this difference was not shown to be statistically significant.
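The risk-reduction arithmetic behind these figures is simple to reproduce. The sketch below (illustrative only; the function name is ours) computes the absolute risk reduction and number needed to treat from the dichotomized functional-independence rates quoted above. Note that the meta-analysis's NNT of 2.6 came from an ordinal (shift) analysis of disability, so the dichotomized NNT computed here is expected to differ.

```python
# Hedged sketch: absolute risk reduction (ARR) and number needed to
# treat (NNT) from the dichotomized outcome rates quoted in the text.

def arr_and_nnt(rate_treated: float, rate_control: float) -> tuple[float, float]:
    """Return (ARR, NNT) for a beneficial outcome that is MORE
    frequent in the treated group (rates given as fractions)."""
    arr = rate_treated - rate_control
    return arr, 1.0 / arr

# Functional independence at 90 days: 46% with MT vs 26.5% control
arr, nnt = arr_and_nnt(0.46, 0.265)
print(f"ARR = {arr:.1%}, NNT ≈ {nnt:.1f}")  # ARR = 19.5%, NNT ≈ 5.1
```

The dichotomized NNT of about 5 is larger than the trial-reported 2.6 because a shift analysis counts any one-level improvement in disability, not only crossing the independence threshold.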
The penumbra concept for patient selection: An intracranial occlusion causing complete cessation of blood flow renders the brain tissue it supplies at risk of irreversible damage. The larger the territory supplied by the occluded vessel, the larger the tissue at risk; and while intracranial and extracranial collateral circulation may compensate for some of the lost blood flow, as time passes even the collateral circulation risks failing, causing irreversible tissue injury. Normal average cerebral blood flow is about 50 mL/100 g/min. When flow drops to 20 to 30 mL/100 g/min, there is selective loss of neuronal function, with electrical failure and inability to conduct electrical impulses, but cell structure remains intact. When flow falls below a critical level between 10 and 20 mL/100 g/min, ATP depletion results in cell membrane damage, swelling, and subsequent cell death. Because of the close interplay of these phenomena, an intracranial occlusion leads to the formation of three zones of brain injury (see Figure 62-4): the ischemic core (tissue irreversibly injured), the ischemic penumbra (tissue with very slow blood flow and cessation of neuronal function but intact cell structure, allowing recovery if adequate blood flow is rapidly restored), and the zone of benign oligemia (tissue with a milder reduction in blood flow that does not place it at risk). Over the last few years, CT-perfusion and MRI-perfusion techniques have been developed that can rapidly provide a reliable estimate of the core, penumbra, and benign oligemia.
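The perfusion thresholds described above can be summarized as a simple classification. The sketch below is illustrative only: the cutoffs are the approximate bands given in the text (the critical threshold for irreversible injury actually lies somewhere between 10 and 20 mL/100 g/min, so 20 is used here only as a conservative illustrative boundary), and the function name is ours.

```python
# Illustrative sketch of the cerebral-blood-flow bands described above
# (units: mL/100 g/min). The exact cutoffs vary with tissue type and
# duration of ischemia; these bands are for orientation only.

def cbf_zone(flow: float) -> str:
    """Classify a cerebral blood flow value into the zones of injury."""
    if flow >= 30:
        return "benign oligemia / near-normal"   # mild reduction, tissue not at risk
    if flow >= 20:
        return "penumbra (electrical failure)"   # neurons silent, structure intact
    return "core (irreversible injury likely)"   # ATP depletion, cell death

for f in (50, 25, 12):
    print(f"{f:>2} -> {cbf_zone(f)}")
```

In practice these zones are estimated volumetrically from perfusion maps, not from single flow values; the point here is only the ordering of the thresholds.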
Although both modalities have their pros and cons, they have been increasingly used in acute stroke imaging and form the basis of the DAWN (DWI or CTP assessment with clinical mismatch in the triage of wake-up and late presenting strokes undergoing neurointervention) and DEFUSE 3 (endovascular therapy following imaging evaluation for ischemic stroke 3) trials published in 2018, which expanded MT to patients who presented with acute disabling stroke symptoms beyond 6 hours from LKW time and had salvageable penumbra as measured by automated quantitative perfusion on a CT-perfusion or MR-perfusion study (see Figure 62-5). There were some differences in clinical and imaging selection between the two trials, but essentially DAWN enrolled patients between 6 and 24 hours from their LKW time and DEFUSE 3 enrolled patients 6 to 16 hours from their LKW time. DAWN enrolled 207 patients and showed the largest treatment effect size in terms of functional outcome, with 49% of the patients undergoing MT
achieving functional independence at 3 months versus only 13% in the medical management arm. Similarly, DEFUSE 3 enrollment was terminated early for efficacy after 182 patients were randomized, and the trial showed return to functional independence in 41% of patients undergoing MT versus only 15% in the medical management arm. The findings from these trials have shifted the paradigm of stroke treatment from a strictly time-based approach toward a more inclusive salvageable-tissue-based approach and expanded eligibility to a much broader population. As in the early time window, the efficacy of MT has been unequivocally accepted for all adult age groups, including octogenarians. Although these trials established the benefit of MT among patients in the extended time window, it is important to understand that relatively few patients fulfill these criteria and that the survivability of the brain tissue depends on a patient’s collateral circulation status. With time, the collateral circulation may fail, although the exact timepoint at which this happens is almost impossible to predict. Thus, even in a patient with a favorable core/perfusion mismatch showing salvageable penumbra, it is crucial that MT be performed without any additional delay to harness the maximal benefit of reperfusion.
FIGURE 62-4. Diagram of a cerebral hemisphere, lateral aspect, indicating the zones of ischemia (red), penumbra (green), and benign oligemia (yellow) that form in an acute ischemic stroke due to an intracranial LVO.
FIGURE 62-5. An example of a CT perfusion study obtained in a 73-year-old man who went to bed at 11 pm and woke at 7 am with left-sided weakness, right gaze deviation, and left-sided homonymous hemianopia and neglect. His signs and symptoms were consistent with a right MCA syndrome. The CTP shows an area of critically reduced cerebral blood flow (red) representing the probable ischemic core, surrounded by an area of hypoperfused tissue (green) representing probable ischemic penumbra. CT angiography showed an intracranial M1 segment occlusion, which qualified the patient for MT. He was immediately taken to the neuroangiography suite, and MT with a stent-retriever device was performed. His symptoms improved significantly by day 3 of admission; his only remaining neurological deficit was a mild left-sided facial droop.
Post-reperfusion management: After the completion of the acute reperfusion therapies, patients should be admitted to a dedicated neurological care unit to monitor for neurological deterioration and cardiovascular or systemic complications and to complete investigations to determine the ischemic stroke mechanism. Upon successful reperfusion, the goal is to maintain hemodynamic stability, targeting gradual normotension. Severe resistant hypertension has been shown to increase the risk of ICH and so, after IVT, the widely accepted routine practice is to maintain blood pressure ranges < 180/105 mm Hg for at least 24 hours while avoiding rapid fluctuations, followed by a 24-hour CT scan or an MRI of the brain to rule out intracranial hemorrhage. To achieve this goal, intravenous antihypertensive medications such as labetalol, as needed, or continuous
infusions of medications such as nicardipine are generally used. Typically, until the 24-hour CT/MRI scan rules out an ICH, pharmacological DVT prophylaxis and antithrombotic agents are avoided unless there is a strong indication otherwise. There are no clear guidelines regarding optimal blood pressure goals after successful MT, but depending on the degree of reperfusion, targeting normotension is widely accepted routine practice until more robust data become available. Similarly, hyperglycemia has been associated with poor stroke outcomes, and hospitalized patients with severe hyperglycemia should be managed with standardized insulin protocols and close blood glucose monitoring. The Stroke Hyperglycemia Insulin Network Effort (SHINE) randomized controlled trial enrolled 1151 acute ischemic stroke patients, randomizing roughly half to an intensive protocol of IV insulin targeting blood glucose of 80 to 130 mg/dL; it found no benefit in this group compared with standard care using subcutaneous insulin to maintain blood glucose between 80 and 179 mg/dL. Consequently, intensive lowering of blood glucose is not currently recommended in the routine care of acute stroke patients. For patients who do not receive acute reperfusion therapies, maintaining cerebral perfusion pressure is critical; blood pressure is allowed to autoregulate, with systolic blood pressure (SBP) permitted up to 220 mm Hg in the first 24 to 48 hours, subsequently aiming to lower blood pressure gently (SBP goal 160–180 mm Hg) within 24 to 72 hours of ischemic stroke onset. Close attention must be paid to the neurologic examination, as abrupt drops in blood pressure may lead to hypoperfusion of an arterial territory at risk of infarction.
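As an illustration of how these monitoring targets fit together, the sketch below encodes the post-IVT blood pressure limit (< 180/105 mm Hg) and the standard-care glucose band (80–179 mg/dL) quoted above as a simple alert check. The function, its name, and its encoding of the thresholds are illustrative only, not a clinical protocol.

```python
# Hedged sketch: flag readings outside the routine post-IVT monitoring
# targets described in the text. Thresholds per the text; everything
# else is illustrative.

def post_ivt_alerts(sbp: int, dbp: int, glucose: int) -> list[str]:
    """Return a list of threshold violations for one set of readings."""
    alerts = []
    if sbp >= 180 or dbp >= 105:
        alerts.append("BP above post-IVT limit (180/105 mm Hg)")
    if glucose < 80:
        alerts.append("hypoglycemia (< 80 mg/dL)")
    elif glucose > 179:
        alerts.append("hyperglycemia (> 179 mg/dL)")
    return alerts

print(post_ivt_alerts(186, 92, 210))  # flags both BP and glucose
```

A real monitoring system would of course track trends and rates of change, not single readings; the point here is only the numeric targets.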
Common medical complications of stroke include aspiration pneumonia, DVT, and pulmonary embolism. When indwelling bladder catheters are used, urinary tract infection is an additional concern. All stroke patients with evidence of dysarthria, aphasia, cough, or aspiration should have a formal swallowing evaluation before being allowed to take food, liquid, or medicines by mouth. Some patients may require placement of a percutaneous endoscopic gastrostomy (PEG) feeding tube, which, in most cases, can be removed in several months when the ability to swallow improves. DVT can be prevented by the administration of subcutaneous heparin or the use of pneumatic compression boots, followed by ambulation as soon as possible.
Care should be taken to avoid hyperthermia, which may worsen stroke outcomes.
Early stroke prevention strategies Aspirin is the most common antithrombotic agent used for secondary stroke prevention after a stroke or a TIA. Unless there are contraindications or an excessive risk of bleeding, aspirin 50 to 325 mg/day is typically initiated as soon as possible after an ischemic stroke or TIA. The risk of recurrent stroke is highest in the first days to weeks after the index event, thought to be as high as 10% within a week after a TIA or minor stroke, depending on the underlying stroke etiology. Although aspirin is beneficial for long-term secondary stroke prevention, most of its benefit is obtained in the first few days after the index TIA/stroke. Caution should be exercised when prescribing aspirin to patients with a known history of gastrointestinal ulceration or bleeding, although doses of 50 to 325 mg/day are generally well tolerated. Clopidogrel is also used as a first-line agent for secondary stroke prevention. In a randomized, blinded trial of clopidogrel versus aspirin in patients at risk of ischemic events (CAPRIE), which randomized patients with recent stroke, MI, or peripheral arterial disease to daily clopidogrel or aspirin, clopidogrel was associated with a modest reduction in the overall primary outcome of stroke, MI, or vascular death; in subgroup analysis, however, the outcomes did not differ among patients with a recent MI or stroke. Some patients exhibit nonresponsiveness to clopidogrel, and although ticagrelor has been considered an effective substitute, it was not shown to be superior to aspirin alone for secondary prevention of stroke, MI, or death at 90 days in the Acute Stroke or Transient Ischemic Attack Treated with Aspirin or Ticagrelor and Patient Outcomes (SOCRATES) trial.
The choice of antiplatelet therapy depends mainly on patient tolerance, contraindications, availability, and cost. While the combination of aspirin plus extended-release dipyridamole has shown modest benefit over aspirin, its side-effect profile of headache and gastrointestinal symptoms has limited its use. There are some data on cilostazol as a second-line agent for Asian patients with aspirin allergy, but high-quality data supporting its use in non-Asian ethnic groups are lacking.
Due to the high stroke recurrence rate in the early period after the index stroke/TIA, there has been an ongoing interest to find optimal treatment
strategies to minimize this risk. The Clopidogrel in High-Risk Patients with Acute Nondisabling Cerebrovascular Events (CHANCE) trial, conducted at 114 clinical centers in China, randomized patients with acute high-risk TIA or minor stroke to an interventional arm (aspirin plus clopidogrel for 3 weeks after the index event, followed by placebo until 90 days) or a control arm (daily aspirin) and compared the primary outcome of recurrent stroke. A total of 5170 patients were enrolled; recurrent strokes occurred more frequently in the control arm (11.7%) than in the interventional arm (8.2%), showing that the addition of clopidogrel within the first 24 hours after a high-risk TIA or minor stroke, continued for a total of 21 days, produced a 32% relative risk reduction for recurrent stroke without significantly increasing bleeding risk. More recently, a multicenter, multinational randomized clinical trial, Platelet-Oriented Inhibition in New TIA and minor stroke (POINT), tested dual antiplatelet therapy (DAPT) in the intervention arm against single antiplatelet therapy in the control arm for a total of 90 days after the index high-risk TIA or minor stroke, enrolling 4881 patients in North America, Europe, Australia, and New Zealand. The primary efficacy outcome of recurrent ischemic stroke, MI, or death from ischemic vascular causes was significantly lower in the intervention arm (5%) than in the control arm (6.5%); however, the risk of major hemorrhage was also higher in the intervention arm (0.9% vs 0.4%). The higher risk of major hemorrhage was thought to be related to the higher loading dose of clopidogrel and the longer duration of DAPT in POINT (3 months) as compared to CHANCE (3 weeks).
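The 32% figure quoted for CHANCE is a relative risk reduction. Computing it from the crude event rates above gives a close, though not identical, estimate, since the trial's reported figure came from a hazard-ratio (time-to-event) analysis; the function below is an illustrative sketch.

```python
# Hedged sketch: relative risk reduction (RRR) implied by the crude
# CHANCE event rates quoted in the text.

def relative_risk_reduction(rate_control: float, rate_treated: float) -> float:
    """RRR = (control rate - treated rate) / control rate."""
    return (rate_control - rate_treated) / rate_control

# Recurrent stroke: 11.7% control (aspirin) vs 8.2% aspirin + clopidogrel
rrr = relative_risk_reduction(0.117, 0.082)
print(f"RRR ≈ {rrr:.0%}")  # RRR ≈ 30%
```

The small gap between this crude 30% and the reported 32% reflects the difference between a simple rate ratio and the Cox hazard ratio used in the trial.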
Moreover, the benefit of DAPT in reducing recurrent ischemic stroke was shown to be greatest in the first 30 days after the index event, whereas the bleeding risk accrued mostly after the first week of DAPT. Combining the findings of these two studies, the current AHA/ASA guidelines on early management of acute ischemic stroke strongly
recommend considering a short course of DAPT with aspirin and clopidogrel initiated within 24 hours from symptom onset after a high-risk TIA or a minor ischemic stroke (provided the patient did not receive IVT) for a total of 21 days. This benefit of DAPT was mirrored in another randomized clinical trial, Acute Stroke or Transient Ischemic Attack Treated with Ticagrelor and ASA for Prevention of Stroke and Death (THALES), that utilized ticagrelor as the second antiplatelet agent substituting clopidogrel. However, the risk of
major hemorrhage, including fatal and intracranial hemorrhage, was elevated among patients randomized to the intervention arm receiving aspirin plus ticagrelor for a total of 30 days.
Acute cerebral venous sinus thrombosis (CVST) may have varied presentations, including ischemic stroke, ICH, headache, and seizures. It is routine practice to consider early anticoagulation with either unfractionated heparin (UFH) or low-molecular-weight heparin (LMWH) for this condition, often even when the patient presents with a baseline ICH. Following a period of neurological stabilization, anticoagulation is switched to an oral agent such as warfarin for 3 to 6 months, or until resolution of the CVST on follow-up imaging, depending on the risks versus benefits. In a recent study, dabigatran showed efficacy similar to that of warfarin in preventing recurrent sinus thrombosis in patients with mild to moderate CVST.
Extracranial cervical artery dissection is another established etiology of acute ischemic stroke, more common in young adults than in the geriatric population. It may present with headache, Horner syndrome, cranial neuropathies, or ischemic stroke/TIA. Although evidence regarding optimal treatment is limited, it suggests that either aspirin/dual antiplatelet therapy or short-term anticoagulation may be a reasonable choice for secondary stroke prevention. The choice of antithrombotic regimen should be based on the treating physician’s experience, the radiological findings (including the size of any stroke), and patient comorbidities, after a detailed discussion of risks versus benefits with the patient and consideration of patient preference for subsequent therapy.
Early recurrent stroke prevention management is dependent on the presumed stroke mechanism, as explored below.
Extracranial Large-Artery Atherosclerosis Surgical therapeutic options can be considered in this subacute phase. Carotid revascularization procedures may be applied in two distinct clinical settings: (1) symptomatic disease and (2) asymptomatic disease. The efficacy of carotid endarterectomy for symptomatic disease is high when there is a 70% to 99% stenosis, but modest when there is a 50% to 69% stenosis. Carotid endarterectomy for mild-to-moderate stroke or TIA in the territory of the ICA has been proven effective by the North American Symptomatic Carotid Endarterectomy Trial (NASCET) study, for patients with both 70% to 99% and 50% to 69% stenosis. Patients with severe stroke in the middle cerebral artery territory
are not appropriate candidates for carotid endarterectomy in the subacute phase. If a patient with severe stroke improves in rehabilitation to the point where worsening of the deficit from recurrent stroke would be problematic, endarterectomy can be reconsidered. The surgical benefit for patients with symptomatic moderate stenosis was statistically significant but small, with only a 1.5% absolute risk reduction per year; surgery should therefore be considered only in centers with low perioperative rates of surgical morbidity. The decision to perform carotid endarterectomy for symptomatic moderate carotid stenosis should be made on an individual basis, considering the patient’s surgical risk, the center-specific perioperative complication rate, and the patient’s life expectancy. Because the surgical risk occurs upfront during the perioperative period, the benefit from surgery accrues with increased years of life. For older adults, the risks and benefits must be carefully weighed, and surgery should be withheld from patients with a life expectancy of less than 5 years. When there is symptomatic severe stenosis (70%–99% luminal narrowing), the surgical benefit is, by contrast, quite high, and most patients will benefit from surgical rather than medical management.
Carotid artery stenting (CAS) with angioplasty and stenting of the ICA plaque has gained ground as an alternative to carotid endarterectomy.
Advantages include shorter hospital stay, lack of a neck incision, avoidance of general anesthesia, and a lower incidence of cranial neuropathy as a complication. A significant concern, however, is the risk of distal embolization of thrombus or fragments of atheroma dislodged during arterial access, balloon inflation, or stent deployment. An evolving array of embolization protection devices, deployed distal to the stent, are designed to limit this risk. Stenting is still an option for patients at high risk for complications from endarterectomy, including those with medical comorbidities, unfavorable neck anatomy, contralateral carotid occlusion, and restenosis at previous endarterectomy site. Randomized trials have addressed whether stenting can produce results as good as endarterectomy.
Two trials, Endarterectomy versus Angioplasty in Patients with Symptomatic Severe Carotid Stenosis (EVA-3S) and Stent-Supported Percutaneous Angioplasty of the Carotid Artery versus Endarterectomy (SPACE), showed that, in symptomatic patients with greater than 60% ICA stenosis, carotid endarterectomy was associated with lower 30-day rates of major complications such as stroke and MI when compared to carotid stenting. By contrast, the Stenting and Angioplasty with Protection in Patients at High
Risk for Endarterectomy (SAPPHIRE) trial, which included only patients deemed poor operative candidates for endarterectomy, showed equivalency between stenting and endarterectomy, with slightly lower complication rates in the stenting group. The Carotid Revascularization Endarterectomy versus Stent Trial (CREST) randomized patients with recent hemispheric TIA, ocular TIA, or minor stroke and at least 50% to 70% carotid stenosis (depending on imaging modality). There were no significant differences between endarterectomy and stenting for the composite outcome of stroke, MI, or death over a 4-year follow-up period. During the 30-day periprocedural period, however, there was a significantly higher risk of stroke with stenting and of MI with endarterectomy. Of interest, an interaction between age and treatment efficacy was detected (p = 0.02), with a crossover at approximately age 70, such that CAS tended to show greater efficacy at younger ages and carotid endarterectomy at older ages. Thus, the treatment choice for carotid revascularization in symptomatic carotid artery stenosis depends on multiple factors and is usually made after multidisciplinary discussion among neurologists, neurointerventionalists, and the specialists performing carotid endarterectomy, with shared decision-making involving the patient and family members and consideration of the risks, benefits, and patient preference for each procedure.
The timing of carotid revascularization for ischemic stroke is of major importance. If the ischemic stroke is small, early intervention, up to 2 weeks poststroke, is preferable. In more extensive strokes, however, the risk of cerebral reperfusion injury after a revascularization procedure is high, especially with severe or critical intracranial carotid stenosis. In this circumstance, carotid revascularization should be deferred to a later date to allow the infarcted territory to heal and the local cerebral blood volume to normalize. Precise assessment of the degree of stenosis and its intracranial effect is extremely helpful when determining the urgency and timing of surgery. In addition, imaging with CTA or MRA helps identify pathology not only at the ICA origin but also in the distal arteries and in those providing collateral flow. By contrast, the benefit of carotid revascularization procedures for asymptomatic carotid artery stenosis (no ipsilateral stroke or TIA in the past 6 months) is currently not well established.
Although earlier clinical trials provided some evidence of a modest benefit
in this population, it is important to note that at the time of those trials, the medical management strategies were not as advanced as they are today and were not able to successfully achieve the currently accepted goals of lipid lowering, blood pressure reduction, and platelet inhibition. Moreover, with a strong emphasis on healthy lifestyle changes in addition to the medical management strategies, it is currently unclear whether carotid revascularization among patients with asymptomatic carotid stenosis is truly superior to conservative management with a combination of medical management and lifestyle changes for the goal of stroke prevention. As we eagerly await the results of CREST-2, carotid revascularization for asymptomatic carotid stenosis is not offered to patients routinely but considered on a case-by-case basis.
Cardioembolic Stroke For patients with nonvalvular AF, anticoagulant therapy with warfarin has been proven by randomized clinical trials to reduce the risk of recurrent cerebral or systemic embolism by close to 70%, as compared to aspirin. The international normalized ratio (INR) goal is 2 to 3. Novel oral anticoagulants (NOACs) such as the direct thrombin inhibitor dabigatran, and the direct factor Xa inhibitors rivaroxaban, apixaban, and edoxaban, have been shown to be noninferior to warfarin in prevention of cerebral or systemic embolism in nonvalvular AF. Further, NOACs as a group carry a significantly lower risk of intracranial hemorrhage than warfarin, albeit with a significantly higher risk of major systemic bleeding, mainly gastrointestinal. Even so, there appears to be a net therapeutic benefit of NOACs over warfarin therapy. An additional advantage of these novel anticoagulants is that they do not require routine hematologic monitoring. Their current major disadvantage, however, is the limited availability of specific antidotes to counter bleeding from these medications. Anticoagulation is relatively contraindicated in infective endocarditis and in left atrial myxoma because of associated cortical surface mycotic and myxomatous aneurysms, with a risk of hemorrhage in the early phases of these diseases.
In advanced congestive heart failure, the risk of ischemic stroke or systemic embolism may be elevated due to the formation of intracardiac clots, especially in extensive anterior wall MI with an apical aneurysm. Anticoagulation may be recommended early on in this setting to prevent initial or recurrent embolism. The Warfarin versus Aspirin in Reduced Cardiac Ejection Fraction (WARCEF) study enrolled patients with left ventricular ejection fraction less than 35% who were in sinus rhythm. Patients were randomized to aspirin 325 mg daily or warfarin with goal INR 2 to 3 and were followed for up to 6 years, with a mean follow-up of 3.5 years.
There was no significant overall difference in the risk of ischemic stroke, hemorrhagic stroke, or death between treatment with warfarin and treatment with aspirin. A reduced risk of ischemic stroke with warfarin was seen, but this was offset by an increased risk of major systemic hemorrhage. Thus, there is currently no clear evidence for anticoagulation of patients with congestive heart failure past the acute to subacute phases of ischemic stroke. A PFO as a source of paradoxical embolism (thromboembolism from the veins into the arteries) causing a stroke has attracted a lot of interest over the last several decades. Earlier clinical trials showed no benefit of endovascular PFO closure over medical management with antiplatelet therapy, but recently several trials have shown a modest reduction in the risk of secondary stroke in carefully selected patients who underwent endovascular PFO closure. It is important to note that all these trials enrolled patients between 18 and 60 years of age, who were thought to be at higher risk of paradoxical embolism, and that the majority of patients had high-risk PFO morphology that rendered them at a higher risk of secondary stroke due to the PFO. It is currently unclear whether endovascular PFO closure among these patients is superior to therapeutic anticoagulation.
Lacunar Infarction The mainstay of early therapy in lacunar infarctions, in patients who do not receive IVT, is mostly supportive, aiming to avoid clinical decompensation, which can occur in the form of a stuttering lacunar syndrome in up to 30% of patients. Antiplatelet therapy should be instituted early on, blood pressure allowed to autoregulate, and glucose closely monitored. The choice of antiplatelet is mainly individual as all are equally effective for secondary ischemic stroke prevention, as discussed later in this section. Longer-term DAPT with aspirin 325 mg and clopidogrel 75 mg daily was not superior to aspirin therapy alone for reduction of recurrent lacunar stroke risk in the Secondary Prevention of Small Subcortical Strokes (SPS3) trial. Further, this combination promoted harm, leading to a significantly higher risk of bleeding and death. There is also no clear evidence that anticoagulation can improve clinical outcomes in lacunar stroke. Therefore, dual antiplatelet therapy and anticoagulation are not recommended for patients with lacunar stroke.
Intracranial Atherosclerosis For patients with symptomatic severe intracranial stenosis, warfarin was also shown in a randomized trial to be no more effective than aspirin in preventing recurrent stroke. A more recent trial, the Stenting and Aggressive Medical Management for Preventing Recurrent Stroke in Intracranial Stenosis (SAMMPRIS) trial, randomized patients with severe (> 70%) symptomatic intracranial stenosis to intensive medical therapy alone versus intensive medical therapy and intracranial stenting with the Wingspan device. All patients received aspirin 325 mg daily and clopidogrel 75 mg daily for 90 days. Intensive medical therapy consisted of management of primary and secondary risk factors, aggressive blood pressure control, and intensive lipid lowering (target LDL-C < 70 mg/dL).
The trial was stopped prematurely because the rate of stroke or death was significantly higher in the stenting group than in the patients receiving medical management alone. This has led to widespread adoption of the medical management practices implemented in SAMMPRIS (DAPT, high-dose statins, tight blood pressure control, and healthy lifestyle and dietary habits) for secondary stroke prevention in patients with symptomatic ICAD.
Cryptogenic Embolism In cryptogenic embolism, debate exists as to the efficacy of anticoagulant therapy. Aspirin or other antiplatelet agents have been prescribed in lieu of anticoagulant therapy, but their efficacy also has not been proven by randomized clinical trials. Stroke and TIA subtypes were not identified in those studies in which aspirin was compared to placebo or to another antiplatelet agent for the secondary prevention of recurrent stroke or TIA after a previous minor primary stroke or TIA. One study in which the primary and secondary stroke subtypes were accurately identified was WARSS, which compared the efficacy of warfarin to aspirin in secondary stroke prevention. When all the stroke subtypes were combined, there was no difference in the risk of stroke with warfarin compared to aspirin. When the data for primary individual stroke subtypes were analyzed in terms of secondary stroke prevention, there was a trend in favor of anticoagulant therapy for patients with cryptogenic embolism. More recently, ESUS has become a popular term denoting patients in whom there is a strong suspicion for an underlying embolic source, but without a definite diagnosis after a standard stroke work-up. Two large-scale randomized clinical trials investigated the benefits of utilizing anticoagulation with NOACs versus standard antiplatelet management among ESUS patients. The Rivaroxaban
Versus Aspirin in Secondary Prevention of Stroke and Prevention of Systemic Embolism in Patients with Recent Embolic Stroke of Undetermined Source (NAVIGATE ESUS) trial enrolled 7213 participants and randomized half of them to the intervention, rivaroxaban 15 mg PO daily, and the other half to standard-of-care aspirin daily. Rivaroxaban was not associated with any benefit in the rate of the primary outcome (recurrent ischemic or hemorrhagic stroke or systemic embolism), while the rates of major bleeding, life-threatening bleeding, and hemorrhagic stroke were higher among patients receiving the study medication than among those receiving the standard of care.
Similarly, another trial compared the efficacy of dabigatran against aspirin in patients with ESUS and found no reduction in the rate of recurrent stroke (ischemic or hemorrhagic), while the risk of bleeding complications increased. Based on these two trials, routine anticoagulation of patients with ESUS is not recommended.
Giant Cell Arteritis (GCA) and Vasculitis Immunosuppression is the mainstay of treatment for patients with a tissue diagnosis of GCA or vasculitis. Maintaining a high index of suspicion is critical in the diagnosis of GCA, as early institution of glucocorticoids has been shown to minimize disability and to reduce the recurrence of flare-ups in this condition. Long-term steroid therapy is usually pursued, with the introduction of other immunomodulatory medications such as tocilizumab, an IL-6 receptor antagonist, to minimize recurrence of neurological and/or ophthalmological symptoms. The duration of immunosuppression is usually determined in a multidisciplinary setting with consensus from the treating neurologists, rheumatologists, and ophthalmologists, in the context of the patient’s comorbidities that influence the risks versus benefits of such long-term immunosuppression. The role of antiplatelet therapy is not well studied and, again, is usually determined based on the individual risks versus benefits of such therapy.
Management of malignant cerebral infarction Hemicraniectomy may be considered when progressive cerebral edema in the nondominant hemisphere becomes severe enough to compromise cerebral perfusion or cause brain herniation. This operation can prevent herniation and death by relieving intracranial pressure (ICP) but is unlikely to restore significant neurologic function because it is typically performed in the setting of extensive brain infarction. A pooled analysis of three small trials confirms that hemicraniectomy prevents severe dependency or death in patients younger than 60 years.
Among patients 61 years or older with a malignant middle cerebral artery
infarction, hemicraniectomy increased survival without severe disability. However, most survivors required assistance with most bodily needs.
In cerebellar ischemic strokes larger than 2 to 3 cm in diameter, decompressive posterior fossa craniectomy may also be performed if the patient shows signs of clinical deterioration, to avoid brain stem compression and upward herniation.
Long-term Preventive Strategies
Preventive strategies for controlling primary risk factors Treatment of hypertension is, by far, the most important aspect of risk factor control. Achieving and maintaining blood pressure treatment targets are probably more important than class effects of the drugs. Among other recommendations for cardiovascular health, the AHA/ASA strongly encourages people to aim to achieve optimal targets in the seven metrics that compose Life’s Simple 7, as follows: (1) never smoking, or having quit smoking > 12 months ago; (2) BMI < 25
kg/m2; (3) physical activity of at least 150 min/week of moderate activity, at least 75 min/week of vigorous activity, or at least 150 min/week of combined moderate and vigorous activity; (4) a diet that includes (a) more than 4.5 cups/day of fruits and vegetables, (b) more than 2 servings/week of fish, (c) more than 3 servings/day of whole grains, (d) no more than 36 oz/week of sugar-sweetened beverages, and (e) less than 1500 mg/day of sodium; (5) blood pressure < 120/80 mm Hg; (6) total cholesterol < 200 mg/dL; and (7) fasting plasma glucose < 100 mg/dL. Following the Third Report of the National Cholesterol Education Program Expert Panel on Detection, Evaluation, and Treatment of High Blood Cholesterol in Adults (ATP III) guidelines for prevention of MI, the primary goal is lowering the low-density lipoprotein (LDL). Risk factor adjustments including diet, exercise, weight reduction, cessation of smoking, and diabetes management are all important first steps. Hydroxymethylglutaryl coenzyme A (HMG-CoA) reductase inhibitors (“statins”) are associated with reduction in the risk of stroke, as well as reduction in the risk of MI. The Stroke Prevention by Aggressive Reduction in Cholesterol Levels (SPARCL) trial showed that, among patients with large-vessel atherothrombotic or lacunar stroke subtypes and LDL greater than 100 mg/dL, high-dose atorvastatin (80 mg daily) was superior to placebo in reducing subsequent stroke. Recently, the Treat Stroke to Target (TST) trial established the benefit of intensive treatment to achieve serum LDL < 70 mg/dL among patients who have had a recent stroke or a TIA and
also have evidence of atherosclerotic disease (intracranial/extracranial atherosclerotic stenosis, history of coronary artery disease, or aortic arch atheroma) in preventing a composite primary outcome consisting of recurrent ischemic stroke, MI, new symptoms leading to urgent coronary or carotid revascularization, or death from cardiovascular causes.
Stroke recovery Patients with stroke-related disability should be evaluated by specialists in physical therapy, occupational therapy, or speech therapy as appropriate. A substantial number of patients may benefit from inpatient rehabilitation following their acute hospital stay. There is evidence from observational studies that patients treated in dedicated stroke units, with experienced physician, nursing, and rehabilitation staff, have better outcomes than those treated in general medical or surgical wards.
Stroke recovery may require 6 months to a year, or even longer, because of the slow nature of neuroplastic changes following brain injury. Late complications of stroke include spasticity, contracture, pressure ulcers, shoulder joint dislocation, and depression. The injection of botulinum toxin into affected muscles has been shown to improve symptoms of stroke-related spasticity. An earlier recovery trial had shown that early use of fluoxetine with physical therapy enhanced motor recovery in stroke patients with moderate-to-severe deficits, independent of the drug’s effect on mood; however, three recently concluded trials showed no benefit of oral fluoxetine on functional outcome after stroke. These trials did show an increased risk of bone fractures among patients receiving fluoxetine, and fluoxetine is therefore not routinely recommended for indications other than mood disorders among stroke survivors.
INTRACRANIAL HEMORRHAGE
Intracranial hemorrhages are classified according to the site of origin. The diagnosis, treatment, and secondary prevention of intracranial hemorrhage depend on the assessment of the underlying specific pathophysiology, analogous to the way in which ischemic stroke diagnosis and treatment depend on identifying the ischemic stroke subtype. There are four locations of origin: intracerebral, subarachnoid, subdural, and epidural. Subdural and epidural hemorrhages are predominantly traumatic and are not considered a form of hemorrhagic stroke. This section will therefore be concerned with ICH and SAH. Hemorrhagic stroke accounts for approximately 15% of all
strokes. ICH may be further divided by location into deep ICH (arising in the deep hemispheric portions of the brain, including the basal ganglia, thalamus, brain stem, and cerebellum) and lobar ICH (arising in the cortex or at the junction of the cortex and the white matter). Hypertension is the most common cause of spontaneous deep ICH, while CAA is the most common cause of spontaneous lobar ICH in older adults.
Intracerebral Hemorrhage
Deep hemispheric hemorrhage Hypertension is by far the major cause of hemorrhage in deep brain locations. The most common sites are the putamen, thalamus, pons, and cerebellum. A penetrating artery arising from one of the major intracranial arteries (middle cerebral stem or M1 segment, basilar artery, or PCA) is generally the source of the hemorrhage. These same vessels are also frequently affected by lipohyalinosis, and when occlusion occurs rather than rupture, lacunar infarction is the outcome.
Clinical Presentation
Each of the four sites of deep ICH produces a characteristic clinical syndrome. In putaminal hemorrhage, there is contralateral hemiplegia and conjugate deviation of the eyes toward the hemorrhage side. Stupor is evident at the onset in most cases. Thalamic hemorrhage typically presents with contralateral sensory loss and gaze abnormalities. Involvement of the internal capsule results in contralateral hemiparesis. Aphasia can occur in left thalamic ICH and visuospatial abnormalities can occur in right thalamic ICH. Involvement of the reticular activating system results in reduced consciousness and sleepiness. Pontine hemorrhage produces coma, quadriplegia, decerebrate rigidity, impairment of horizontal eye movements, and pinpoint pupils. Cerebellar hemorrhage typically presents with headache, vertigo, nausea, vomiting, and instability of gait. Facial weakness and gaze palsies can also occur. Coma may result from brain stem compression or upward or downward herniation of cerebellar structures.
Lobar hemorrhage Lobar hemorrhages occur spontaneously in the supratentorial white matter and cerebral cortex of all lobes of the brain. The manifestations are dependent on the area involved. Frontal hemorrhages may present with contralateral hemiparesis, expressive aphasia, and gaze deviation toward the hemorrhage. Parietal hemorrhages may result in contralateral neglect and sensory loss, while occipital hemorrhages could present with contralateral
homonymous hemianopia. Temporal hemorrhages of the dominant hemisphere may present with receptive aphasia. Headache is sometimes present and may be most severe near the location of the hemorrhage. If the hemorrhage is large, then depressed consciousness may be present. The symptoms usually evolve over minutes or hours, in contrast to embolic ischemic stroke, where the onset of symptoms is abrupt. A precise cause of lobar ICH is found in a significant number of cases: CAA and vascular malformations are the most common; metastatic disease is less frequent, but well known. A proportion of lobar ICH is likely caused by hypertension. Lobar ICH is more readily accessible for surgical evacuation because of its superficial location. A subgroup analysis of the Surgical Trial in Intracerebral Hemorrhage (STICH) showed a benefit for surgery, compared to initial conservative management, for ICH less than 1 cm from the cortical surface.
CAA is a common cause of both single and recurrent lobar hemorrhages in the older population and is diagnosed conclusively only by postmortem demonstration of amyloid in the media of cortical and leptomeningeal arterioles and capillaries. Sporadic amyloid angiopathy is caused by the deposition of β-amyloid only in the cerebral arteries, without systemic amyloidosis. However, a clinical history of repeated supratentorial lobar hemorrhages and the demonstration of small, 1- to 2-mm areas of hypointensity on MRI susceptibility-weighted sequences, indicative of prior small asymptomatic hemorrhages (cerebral microbleeds), strongly suggest the diagnosis. These small, silent hemorrhages may be the cause of recurrent focal symptoms sometimes seen in these patients. A clinical-pathologic correlation study suggests that CAA is the cause of approximately 70% of primary lobar ICH in persons older than 55 years. Other than avoidance of antithrombotic medications, treatment options remain elusive. There is a high recurrence rate of 10% to 14% per year in sporadic lobar ICH, mostly caused by CAA, which is significantly higher than the recurrence rate of 2% to 4% per year observed in survivors of deep ICH. Careful control of associated hypertension, if present, seems prudent.
A small number of cases are associated with vascular or perivascular inflammation that frequently responds to a pulse of steroids, although late relapses may occur. Inflammatory CAA typically presents with cognitive impairment, seizure, or focal neurologic signs rather than hemorrhagic stroke. Asymmetric white and gray matter hyperintensities, often with small silent
hemorrhages on susceptibility sequence, are observed on MRI. Biopsy is usually indicated to exclude other causes of vasculitis.
Lobar Hemorrhages Caused by Metastatic Disease Cerebral metastases, particularly malignant melanoma, may give rise to cerebral hemorrhage. Usually, the metastases are multiple and can easily be demonstrated by contrast MRI or CT scans.
Evaluation of ICH: Noncontrast head CT scan has excellent sensitivity for ICH and is the initial modality of choice in ICH diagnosis. There are no clinical signs or symptoms specific for ischemic stroke compared to hemorrhagic stroke, making CT mandatory in all patients presenting with potential stroke. Vessel imaging with CTA or MRA should be performed for the diagnosis of cerebral aneurysm or vascular malformation. Venous imaging with CT venography (CTV) or MR venography (MRV) should be performed if concern for venous sinus thrombosis exists. Catheter angiography should be performed if suspicion remains high for vascular abnormality despite negative CTA or MRA. Contrast-enhanced CT or MRI should be performed when suspicion of underlying mass lesion exists. A “spot sign” on CTA is the finding of contrast extravasation into a hematoma and is associated with hemorrhage expansion and poor outcome. MRI also has excellent sensitivity for detecting ICH. Hemorrhage on special T2*-weighted sequences such as gradient recalled echo (GRE) or susceptibility-weighted imaging (SWI) appears as hypointensity or low signal compared to surrounding brain. Cerebral microbleeds are small areas of hypointensity that represent chronic small hemorrhage. Their location often points to an underlying etiology, with deep microbleeds mainly due to hypertension and lobar microbleeds due to CAA. Older patients with a history of hypertension and hemorrhage in a typical location do not need angiographic assessment. In contrast, younger patients and those with an atypical-appearing hemorrhage should have angiography. When hemorrhages occur in atypical locations or in the absence of a history of hypertension, a follow-up MRI brain scan with and without contrast is performed within 3 months, following ICH resorption, to screen for an underlying lesion such as a vascular malformation or tumor.
Acute laboratory assessment should include platelet count, prothrombin time, partial thromboplastin time, and INR to exclude bleeding diathesis. An ESR, CRP, and blood cultures should be considered in patients at risk for septic embolism from bacterial endocarditis. The history, physical examination, and imaging should exclude secondary causes
of hemorrhage such as coagulopathy, brain tumor, aneurysm rupture, and hemorrhagic transformation of ischemic infarction.
The size and location of the hematoma determine the treatment and prognosis. Supratentorial hematomas greater than 5 cm in diameter have a poor prognosis. Infratentorial hematomas greater than 3 cm in size are generally fatal if they are in the pons. Other factors associated with poor prognosis are the presence of intraventricular blood and worse level of consciousness at presentation.
Treatment of ICH: All patients with ICH should be initially managed in an intensive care unit. General supportive care should be provided.
Eunatremia, normoglycemia, and normothermia are mainstays of treatment. Seizures may occur and should be treated with anticonvulsant medications when they occur. Seizure prophylaxis is generally not recommended. Patients taking warfarin at the time of ICH must have the INR corrected emergently with vitamin K and vitamin K–dependent factors and the warfarin discontinued. Patients with ICH on antiplatelet medications should have these antiplatelet medications discontinued. A randomized clinical trial evaluated the utility of platelet transfusion in patients with spontaneous intracranial hemorrhage and found empiric platelet transfusion to be associated with a higher rate of death or dependence at 3 months as compared to standard of care; it is therefore not currently recommended routinely unless a neurosurgical procedure is planned. Prompt anticoagulation reversal may be useful in reducing hematoma expansion and subsequent morbidity/mortality, but this often comes at the cost of inadvertent procoagulant effects of the reversal agent that may contribute to thrombotic complications. For warfarin, IV vitamin K is used along with fresh frozen plasma/cryoprecipitate or prothrombin complex concentrate (PCC). NOAC-specific reversal agents (idarucizumab for dabigatran and andexanet alfa for the factor Xa inhibitors) are currently not widely available on a routine basis; therefore, PCC is often used in that scenario. The relationship between blood pressure and outcomes after ICH is complex. Although blood pressure reduction is crucial in prevention of hematoma expansion, the exact levels of blood pressure reduction have yet to be defined. Current guidelines state that it is likely safe to reduce SBP to 140 mm Hg acutely when the presenting SBP is between 150 and 220 mm Hg. For patients presenting with SBP > 220 mm Hg, aggressive reduction of blood pressure with continuous intravenous infusion of antihypertensive medication and frequent blood pressure
monitoring to maintain SBP around 140 mm Hg to 160 mm Hg is recommended. In clinical trials, a more sudden reduction of blood pressure to achieve SBP < 140 mm Hg was associated with increased renal complications and was not clearly associated with any benefit in overall outcomes. Elevated ICP can result either from hematoma or associated cerebral edema. Patients with Glasgow Coma Scale (GCS) less than or equal to 8, evidence of transtentorial herniation or hydrocephalus, or significant intraventricular hemorrhage should have ICP monitoring and treatment.
Osmotic agents such as mannitol or hypertonic saline are frequently used to lower ICP. Ventriculostomy with an external ventricular drain is often used in obstructive or communicating hydrocephalus.
The surgical indications for ICH depend mainly on the location of the ICH. Patients with cerebellar hemorrhages greater than 3 cm and neurologic deterioration or brain stem compression and/or hydrocephalus should have early hematoma evacuation. For other locations, surgical evacuation is controversial. The STICH trial failed to find a benefit for surgical evacuation over medical management. In STICH, patients with ICH less than 1 cm from the cortical surface had a trend toward favorable outcome with early surgery. Surgical evacuation for deep ICH accompanied by decreasing level of consciousness caused by increasing mass effect is also considered. Less invasive surgical methods are currently being developed and could be used widely in the future.
Vascular malformations Increasingly recognized with advances in neuroimaging techniques, vascular malformations are classified into four types: venous malformations, capillary telangiectasias, arteriovenous malformations (AVMs), and cavernous malformations or angioma. Vascular malformations may cause hemorrhage in either deep or lobar locations.
The management of patients with vascular malformations is best accomplished by an experienced team composed of neurosurgeons and physicians who can consider both surgical and endovascular approaches, sometimes in combination. Radiosurgical obliteration may also be an option in some cases (ie, deep, inaccessible lesions associated with repeated hemorrhages or progressive neurologic deficits). Each case requires a unique approach that takes into account the extent and location of the vascular malformation, and the feasibility and safety of the various therapeutic approaches.
Subarachnoid Hemorrhage
The most common cause of atraumatic SAH is rupture of an aneurysm. Other less-common causes include blood dyscrasia or leukemia, tumors (such as ependymoma or meningioma, glioblastoma, renal cell, or metastasis), vascular malformation, or, rarely, venous sinus disease or meningitis.
Aneurysms occur at the branch points of arteries at the base of the brain and, following rupture, give rise to SAH or sometimes ICH. Histologic examination shows interruption of the internal elastic lamina with an aneurysmal outpouching that appears like a berry. Although usually single, aneurysms are multiple in 15% to 20% of cases. The most common sites are the anterior communicating artery, the junction of the posterior communicating artery with the ICA, the middle cerebral stem bifurcation, the top of the basilar artery, the origins of the major branches of the basilar artery, and the origin of PICA. Intracranial aneurysms can be associated with coarctation of the aorta and are more common in autosomal dominant polycystic kidney disease.
Prodromal symptoms and signs, prior to rupture, occur in as many as a third of cases. These symptoms may include headache, diplopia, or blurred vision. Pinpoint pain behind the eye, with or without a third nerve palsy, is the most common of these and indicates the presence of an aneurysm at the posterior communicating artery–ICA junction. It represents a medical emergency.
Juxtaclinoid aneurysms compress the optic nerve, leading to amblyopia. Supraclinoid aneurysms can be confused with suprasellar tumors, sometimes producing a hypothalamic syndrome. Aneurysms in the vertebrobasilar system can produce occipital headaches and cerebellar or long-tract signs as well as cranial nerve deficits.
The most common symptom of aneurysmal rupture is a sudden, severe headache, often characterized as the “worst headache of life.” Other symptoms may include loss of consciousness, nausea and vomiting, and meningismus. Focal neurologic signs occur infrequently. Focal signs occur when the hemorrhage ruptures into the brain parenchyma and help in localization of the rupture site. For example, anterior communicating artery aneurysmal rupture produces a slowed or abulic state with contralateral weakness of the leg while middle cerebral artery aneurysm rupture produces contralateral hemiparesis and aphasia if the dominant hemisphere is involved.
Evaluation of SAH: A noncontrast head CT is the initial modality of choice and has excellent sensitivity for SAH. At our institution a CTA of the head, to look for a cerebral aneurysm, is also performed at the time of the noncontrast head CT. If the head CT is negative for SAH, but high suspicion remains, a lumbar puncture (LP) must be performed. The main findings consistent with SAH on the LP are elevated red blood cell count, xanthochromia, and elevated opening pressure.
Delayed clinical syndromes mainly consist of re-rupture syndromes, hydrocephalus, and the syndromes resulting from cerebral vasospasm. Re-rupture is most frequent during the first 72 hours after the initial rupture.
Early surgical or endovascular intervention now precludes many re-ruptures. Fever (≥ 38°C) in the absence of a discernible infective etiology is common in SAH, and in some instances, may be confused with florid meningitis.
Hydrocephalus may be acute in the first day or two after the SAH and requires ventricular drainage. Delayed hydrocephalus presents several days to weeks after the SAH and may require ventriculoperitoneal shunting.
Worsening stupor is a sign of both early and delayed hydrocephalus. Cerebral vasospasm usually develops between days 4 and 14 following the SAH. Its location and severity have been related to the extent and location of the subarachnoid blood, with thick clot typically present around the artery developing spasm. In 30% of cases, the spasm is severe enough to give rise to ischemic symptoms, and infarction may ensue. In middle cerebral artery stem vasospasm, the resulting infarction may cause devastating cerebral edema. The extent and location of blood in the basal cisterns postoperatively may identify patients likely to develop vasospasm severe enough to produce signs of ischemia or infarction.
Management of the patient with SAH from a ruptured berry aneurysm should focus on (1) medical stabilization, with aggressive treatment of elevated blood pressure and good supportive neurocritical care, (2) early surgical clipping or endovascular coiling in non-moribund patients to prevent re-rupture, and (3) prevention and treatment of delayed ischemia caused by vasospasm. The International Subarachnoid Aneurysm Trial (ISAT) showed that, for aneurysms equally accessible for surgical clipping or endovascular treatment with detachable coils, the endovascular strategy was associated with less death or dependency at 1 year. Volume expansion and blood pressure elevation together with calcium channel blockers are often used to prevent symptomatic ischemia from vasospasm. Oral nimodipine, 60
mg given every 4 hours, has been shown in multiple clinical trials to reduce mortality and the incidence of delayed ischemia. For refractory cases local intra-arterial infusion of vasodilators such as nicardipine, or angioplasty of the affected arteries, may alleviate arterial stenosis caused by vasospasm.
Given the above complications and expertise necessary to perform highly technical neurosurgical and endovascular approaches to obliterate aneurysms, the patient should be managed at a center capable of carrying out these maneuvers.
PALLIATIVE CARE
Palliative care is an approach that optimizes quality of life, comfort, and family-centered care by anticipating, preventing, and treating suffering. It involves addressing the physical, intellectual, emotional, social, and spiritual needs of patients and their loved ones. Stroke is associated with significant morbidity and mortality, especially in older adults. Large ischemic strokes, catastrophic ICHs, and SAHs carry significant mortality and disability in older patients, and almost half of the deaths occur in an inpatient setting (acute care hospital or rehabilitation facility). Primary palliative care is administered by the patient’s primary treatment team, which may consist of neurologists, hospital medicine specialists, physical and rehabilitation medicine specialists, and speech therapists, among others. Pain is a significant symptom among poststroke patients and should be adequately addressed to enhance patients’ well-being and neurological recovery. Common pain syndromes include central poststroke pain, hemiplegic shoulder pain, and painful spasticity. While amitriptyline is often the drug of choice for pain relief in younger patients, caution should be exercised when prescribing this medication to older adults because of its side effects, and nortriptyline is often the preferred treatment. For patients with hemiplegic shoulder pain, heat, ice, soft tissue massage, intra-articular steroid injections, and intramuscular botulinum toxin injections are considered reasonable treatment choices. Other comorbidities such as fatigue, urinary incontinence, sleep-disordered breathing, depression, and anxiety should be promptly identified and addressed.
An essential part of establishing goals of care for a stroke patient is to obtain a thorough understanding of the aspects of functional recovery that are crucial to the individual patient—for example, the ability to ambulate or the ability to communicate. Many patients express a strong desire to refrain from
pursuing any lifesaving treatments if their current medical condition severely limits the likelihood of returning to their functional baseline in some of these aspects. Ideally, these goals-of-care discussions, especially in the elderly, should begin at the planning stage of administering acute stroke treatments, as these treatments often necessitate coadministration of other life-sustaining strategies such as mechanical ventilation, in turn requiring a variable length of stay in a critical care unit as patients recover from their acute illness. As such, discussions of the risks versus benefits of acute stroke treatments should include a clear explanation of the risks versus benefits of these additional life-sustaining measures that are necessary to obtain the maximum benefit from stroke treatment. While accurate prognostication regarding stroke recovery is subject to several uncertainties, a careful assessment of the individual patient’s comorbidities, the size of the stroke, and the eloquence of the affected brain tissue may allow for more informed shared decision-making between patients/families and the clinical teams. A sizable proportion of stroke patients suffer from dysphagia requiring artificial nutrition and hydration (ANH) through a nasogastric tube or PEG. In patients who cannot swallow safely, it is reasonable to offer a trial of a nasogastric tube for up to 2 to 3 weeks before considering a PEG. During this trial period, aggressive speech therapy should be provided, making every effort to establish the true extent of the patient’s swallowing capabilities, and findings should be reviewed with the patient and family members. Finally, in patients who have suffered a severe degree of illness limiting their ability to achieve their desired outcome and who wish to pursue end-of-life or comfort measures, a multidisciplinary team approach in consultation with palliative care expertise should be promptly offered.
Patients with a life expectancy of 6 months or less may be appropriate candidates for hospice. Depending on their health care needs, hospice care can be instituted in an inpatient setting or, in the appropriate context, even at home. Hospice care does not necessarily mean cessation of all medical treatments; rather, it aims to offer comfort to the patient, prevent suffering, and focus on quality of life at the patient’s terminal stage of illness.
ACKNOWLEDGMENT
The authors utilized information contained in a similar chapter in the 7th edition of this textbook and wish to thank Erica Camargo, MD, PhD, MSC, Ming-Chieh Ding, MD, PhD, Eli Zimmerman, MD, and Scott Silverman, MD, for their contributions to that chapter.
FURTHER READING
Albers GW, Marks MP, Kemp S, et al. Thrombectomy for stroke at 6 to 16 hours with selection by perfusion imaging. N Engl J Med. 2018;378(8):708–718.
Berkhemer OA, Fransen PSS, Beumer D, et al. A randomized trial of intraarterial treatment for acute ischemic stroke. N Engl J Med. 2015;372:11–20.
Brott T, Bogousslavsky J. Treatment of acute ischemic stroke. N Engl J Med. 2000;343(10):710–722.
Brott TG, Brown RD Jr, Meyer FB, Miller DA, Cloft HJ, Sullivan TM. Carotid revascularization for prevention of stroke: carotid endarterectomy and carotid artery stenting. Mayo Clin Proc. 2004;79:1197–1208.
Connolly ES, Rabinstein AA, Carhuapoma JR, et al. Guidelines for the management of aneurysmal subarachnoid hemorrhage: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2012;43(6):1711–1737.
Goyal M, Demchuk AM, Menon BK, et al. Randomized assessment of rapid endovascular treatment of ischemic stroke. N Engl J Med. 2015;372:1019–1030.
Goyal M, Menon BK, van Zwam WH, et al. Endovascular thrombectomy after large-vessel ischaemic stroke: a meta-analysis of individual patient data from five randomised trials. Lancet. 2016;387(10029):1723–1731.
Hemphill JC 3rd, Greenberg SM, Anderson CS, et al. Guidelines for the Management of Spontaneous Intracerebral Hemorrhage: A Guideline for Healthcare Professionals From the American Heart Association/American Stroke Association. Stroke. 2015;46(7):2032–2060.
Kern R, Ringleb PA, Hacke W, Mas JL, Hennerici MG. Stenting for carotid artery stenosis. Nat Clin Pract Neurol. 2007;3:212–220.
Kleindorfer DO, Towfighi A, Chaturvedi S, et al. 2021 Guideline for the Prevention of Stroke in Patients With Stroke and Transient Ischemic Attack: A Guideline From the American Heart Association/American Stroke Association. Stroke. 2021;52(7):e364–e467.
Nogueira RG, Jadhav AP, Haussen DC, et al. Thrombectomy 6 to 24 hours after stroke with a mismatch between deficit and infarct. N Engl J Med. 2018;378(1):11–21.
Powers WJ, Rabinstein AA, Ackerson T, et al. Guidelines for the Early Management of Patients With Acute Ischemic Stroke: 2019 Update to the 2018 Guidelines for the Early Management of Acute Ischemic Stroke: A Guideline for Healthcare Professionals From the American Heart Association/American Stroke Association. Stroke. 2019;50(12):e344– e418.
Chapter
Other Neurodegenerative Disorders
John Best, Howie Rosen, Victor Valcour, Bruce Miller
Alzheimer disease (AD) is the most common neurodegenerative disorder encountered by the practicing geriatrician; however, a sizable number of other neurodegenerative diseases will be seen in a typical practice, rendering a working knowledge of these disorders critical for clinicians. Furthermore, there is increasing evidence that with greater age, many individuals die with mixed pathology—with AD, vascular changes, frontotemporal lobar degeneration (FTLD), and Lewy body changes often seen in a single brain. This chapter provides an overview of the more common neurodegenerative disorders with emphasis on those that influence behavior and cognition early in the course. We begin by reviewing the clinical approach to neurodegenerative cognitive disorders and then review the clinical presentation, epidemiology, and examination findings of the more common neurodegenerative syndromes. We attempt to link clinical presentation to anatomy and neuropathology whenever possible.
APPROACH TO THE EVALUATION OF COGNITIVE AND BEHAVIORAL DISORDERS IN ADULTS
The evaluation of neurodegenerative disorders is multifaceted, requiring careful attention to the cognitive, behavioral, and motor history combined with a comprehensive neurologic examination aiming to identify the brain regions involved. Isolating anatomy in patients who present with slowly progressive neurodegenerative disorders greatly facilitates the determination of the correct diagnosis.
Emphasis should be placed on the earliest presenting symptoms, whether cognitive, behavioral, or motor in origin. These early features may be critical
to the identification of the pathologic substrate. As diseases progress, signs and symptoms merge between the different disorders, making diagnosis more difficult. An early history of repeated falls, for example, should warrant concern for progressive supranuclear palsy (PSP), vascular dementia, or Parkinson disease. This finding is valuable when it is present early in the illness, although most dementias are associated with basal ganglia involvement later in their disease course, diminishing the value of falls for diagnosis in the later stages. Likewise, inappropriate behavior and disinhibition are commonly seen in patients with advanced dementia syndromes, regardless of disease etiology; however, when these findings are a prominent presenting feature in the relative absence of amnestic symptoms, frontotemporal dementia (FTD) should be considered more likely.
Learning Objectives
Learn about the epidemiology, common clinical presentations, diagnosis, and treatment of non- Alzheimer type of neurodegenerative diseases.
Gain new knowledge about recent discoveries related to the genetics, pathology, and pathobiology of common tau- and α-synucleopathies in older adults.
Learn about the specific behavioral and nonbehavioral symptoms, clinical signs, diagnostic criteria, and common neuroimaging, genetic, and laboratory tests used to diagnose non-Alzheimer types of dementia.
Understand the scientific rationale, indications, and limitations of currently available and emerging therapies for common non-Alzheimer type neurodegenerative diseases.
Key Clinical Points
The evaluation of neurodegenerative disorders includes careful attention to the cognitive, behavioral, and motor symptoms along with a thorough neurologic examination.
In older age, it becomes increasingly common for there to be more than one type of pathology causing a dementia syndrome.
The characteristic features of Lewy body dementia include cognitive impairments with profound fluctuation, spontaneous parkinsonism, rapid eye movement (REM) behavior sleep disorder, and visual hallucinations.
About 30% of patients with Parkinson disease develop dementia. In these patients, unlike Alzheimer disease (AD), the motor and other symptoms of Parkinson disease generally predate dementia by many years.
The dominance of behavioral and personality changes in the absence of memory and perceptual symptoms is highly suggestive of the behavioral variant of frontotemporal dementia (FTD).
History of falls and dysphagia with abnormalities in vertical gaze and preserved oculocephalic reflex is suggestive of a diagnosis of progressive supranuclear palsy (PSP).
Cognitive histories should be comprehensive and must include evaluation of memory, language, visuospatial function, executive functioning, behavior, and attention (Table 63-1). The comprehensive history should probe for autonomic symptoms and sleep patterns, with emphasis on symptoms associated with disorders of REM sleep behavior and sleep apneas.
TABLE 63-1 ■ ASSESSMENT OF MAJOR COGNITIVE DOMAINS IN NEURODEGENERATIVE DISORDERS
The assessment of behavioral symptoms can be particularly helpful and sometimes critical in non-Alzheimer neurodegenerative disorders.
Behavioral variant frontotemporal dementia (bvFTD) is the most common cause of dementia in patients younger than age 60. Behavioral symptoms or personality change are commonly the presenting symptoms of this disorder (see discussion later in this chapter). Recent research criteria for bvFTD require three of the following six symptoms to meet “possible” criteria: early disinhibition, early apathy, early loss of sympathy or empathy for others, early repetitive motor behaviors, early hyperorality, and deficits in frontal executive function with relative sparing of visuospatial abilities. For probable bvFTD, in addition to meeting the “possible” criteria just listed, frontotemporal atrophy or the presence of a known causative mutation is required. The emergence of predominant behavior and personality changes in the absence of episodic memory and perceptual complaints localizes disease to the frontal or anterior temporal lobes, although presence of memory loss does not rule out bvFTD.
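The counting rule described above can be sketched as a simple decision function. This is an illustrative sketch only: the symptom keys are shorthand labels of my own (not from the criteria documents), and in practice each feature requires careful clinical assessment.

```python
# Illustrative sketch of the bvFTD research criteria counting rule described
# in the text: three of six early symptoms -> "possible"; adding imaging
# atrophy or a known causative mutation -> "probable". Symptom names are
# hypothetical shorthand, not official terminology.
BVFTD_SYMPTOMS = {
    "early_disinhibition",
    "early_apathy",
    "early_loss_of_empathy",
    "early_repetitive_motor_behaviors",
    "early_hyperorality",
    "executive_deficits_sparing_visuospatial",
}

def classify_bvftd(symptoms, frontotemporal_atrophy=False, causative_mutation=False):
    """Return 'probable', 'possible', or 'criteria not met' per the counting rule."""
    count = len(BVFTD_SYMPTOMS & set(symptoms))
    if count < 3:
        return "criteria not met"
    if frontotemporal_atrophy or causative_mutation:
        return "probable"
    return "possible"
```

For example, three early symptoms alone yield "possible," and the same three plus documented frontotemporal atrophy yield "probable."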
The neurologic examination is a critical component to the assessment of neurodegenerative disorders and typically confirms the clinical impression obtained from the history. The motor examination identifies both pyramidal and extrapyramidal signs as well as features characteristic of FTD, dementia with Lewy bodies (DLB), corticobasal degeneration (CBD), and PSP. Examination of cranial nerves includes an assessment of eye movements and
range of gaze. Abnormalities in vertical gaze, either reduced amplitude or complete palsy, with a preserved oculocephalic reflex are a characteristic finding in PSP. Horizontal gaze abnormalities are more typical of CBD. Abnormalities in saccadic eye movements can be seen in several neurodegenerative disorders ranging from AD to PSP. Saccadic movements are tested by asking the patient to focus on an object in front of them (such as a pen tip held by the examiner) then to quickly refocus on an item in their peripheral field (such as the examiner’s finger) while the examiner watches carefully for saccadic latency (delay in initiation of movement), incomplete saccades (gaze palsy), and interrupted or jerky saccadic movement.
Saccades should be tested in all four directions. The examiner should test ocular pursuit by having the patient track an object, such as the examiner’s finger, in both horizontal and vertical directions.
The examination should probe for changes in cognition, behavior, and movement with emphasis upon the earliest abnormalities. It is important to realize that early symptoms reflect where the illness began, and this is usually helpful in the determination of disease etiology. Confirmatory imaging and laboratory work and standard laboratory tests to exclude treatable etiologies of cognitive impairment can then be completed (Table 63-2).
TABLE 63-2 ■ SOME LABORATORY TESTS COMMONLY USED IN THE EVALUATION OF COGNITIVE DISORDERS
DISORDERS ASSOCIATED WITH α-SYNUCLEIN DEPOSITION
Dementia With Lewy Bodies
The precise role of α-synuclein in health and disease is not fully understood. High concentrations of α-synuclein in synaptic regions suggest a role in synaptic plasticity. In DLB, α-synuclein accumulates within the brain stem, basal ganglia, and cortex, resulting in a progressive neurodegenerative cognitive, behavioral, and motor disorder. The prevalence of DLB is still uncertain, and the coassociation of Lewy body and AD pathology is extremely common in aging dementia populations. Indeed, even in classical AD associated with the apolipoprotein E4 genotype, more than 50% of subjects show Lewy bodies, and genetic disorders that predispose to Lewy bodies often show the coassociation of Aβ-42. Based on autopsy studies, DLB, especially co-occurring DLB and AD, is often underdiagnosed during life.
The reported mean onset of disease is 75 years with a range from 50 to 80 years, and there is a slight predominance of DLB in men compared to women. The clinical and pathologic overlap between DLB and AD and with Parkinson disease dementia (PDD) is well recognized, resulting in some diagnostic challenges. Accurate diagnosis has clinical implications, as patients with DLB compared to AD tend to have a robust response to cholinesterase inhibitors, and patients with DLB often have sensitivity to neuroleptic medications.
The core features of DLB are cognitive impairment with profound fluctuations in attention and cognition, spontaneous parkinsonism, REM behavior sleep disorder (the physical enactment of dreams), and recurrent well-formed visual hallucinations. The typical neuropsychological profile differs somewhat from that of AD. DLB often involves early executive dysfunction and, when present, more severe visuospatial dysfunction. In DLB there is typically better performance on episodic memory and recognition memory tasks when compared to AD patients. Invariably, memory becomes impaired over time in both disorders. Combined cortical and subcortical deficits are common, and deficits in attention are a core feature of the disease.
The degree of cognitive fluctuation can be so profound as to alter Mini-Mental State Examination (MMSE) scores by up to 50% from day to day, and
many patients repeatedly move in and out of delirium. Unfortunately, eliciting a history of fluctuation can be difficult, and this symptom may not be well described by caregivers. Clinicians should consider several different approaches, including questions focused on marked alteration in attention, staring spells, daytime sleepiness, and episodes of incoherent speech. On occasion, the fluctuation can be so severe as to result in emergency room evaluation for a transient ischemic attack (TIA) or delirium. Several structured scales exist to assist in assessment of fluctuation, including the One Day Fluctuation Assessment Scale and the Clinical Assessment of Fluctuation.
A common feature of neurodegenerative disorders with α-synuclein pathology is REM sleep behavior disorder (RBD), defined as vivid and often frightening dreams that are frequently acted out verbally or motorically. RBD occurs in 85% of DLB cases, compared to 15% of Parkinson disease patients and 60% of patients with multiple system atrophy (MSA). It is uncommon in other forms of dementia where α-synuclein pathology is absent. In the DLB diagnostic criteria, RBD is considered a core feature. Autonomic dysfunction is also common and can be profound with repeat syncope and unexplained loss of consciousness. When REM behavior occurs in association with severe autonomic symptoms, MSA rather than DLB should also be considered. Significant depressive symptoms occur in up to 40% of cases and can often precede other symptoms by years.
About two-thirds of DLB cases will exhibit visual hallucinations, misperceptions or, less frequently, delusional misidentification. When delusional misidentification, such as mistaking the spouse, friend, or relative as an impostor, occurs as the first symptom of a dementia, DLB is highly likely. The visual hallucinations of DLB can be vivid with distinct colors and the inclusion of human figures and animals. In contrast to the hallucinations sometimes seen with more advanced AD, visual hallucinations occur early in DLB. In one pathology-based study, visual hallucinations corresponded to a greater number of Lewy bodies in the anterior/inferior temporal lobes and to larger deficits in acetylcholine. Visual hallucinations often respond to boosting brain acetylcholine with cholinesterase inhibitors.
Spontaneous parkinsonism is a hallmark feature of DLB, eventually occurring in up to 70% of cases. Common findings include bradykinesia, axial and appendicular rigidity, postural instability, slowed response times, and hypomimia. In contrast to Parkinson disease, the parkinsonism of DLB is
more commonly bilateral and less frequently includes tremor. Parkinsonism rarely occurs in isolation as an early finding in DLB, although early features of the cognitive syndrome are commonly overlooked. Response to levodopa treatment is less frequent than the response seen in Parkinson disease; however, efficacy data may be biased, as clinicians are more likely to avoid such drugs in DLB patients for fear of adverse effects such as orthostasis or aggravation of hallucinations.
Parkinsonism contributes substantially to the disability in DLB and alters the clinical course. Mean survival among postmortem confirmed cases of DLB is 10 years, with a rate of disability at approximately 10% per year, often exceeding that of Parkinson disease. Cases of rapid progression to death in 1 to 2 years have been described. Risk factors for higher mortality include older age, hallucinations, greater degrees of fluctuation, and neuroleptic sensitivity.
Diagnostic criteria for DLB assist in identification of disease and are particularly useful for research purposes (Table 63-3). In the revised schema, core features include prominent fluctuation, recurrent visual hallucinations, RBD, and spontaneous features of parkinsonism such as rigidity, bradykinesia, and hypomimia. Supportive and suggestive features are also defined. To meet diagnostic criteria for probable disease, two core features or the combination of one core feature and one supportive feature are required. If one core feature is present without any suggestive features, or if suggestive features are present in the absence of any core features, the term “possible DLB” is used.
TABLE 63-3 ■ MCKEITH CRITERIA FOR DLBA
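The probable/possible logic quoted above can be expressed as a small classifier. This sketch mirrors the wording of this section only (the McKeith criteria have been revised over time, so this should not be taken as the current consensus schema); feature counting here assumes dementia is already established.

```python
# Sketch of the DLB counting rule as stated in the text (dementia assumed):
#   probable DLB: 2 core features, or 1 core + 1 supportive feature
#   possible DLB: 1 core feature with no suggestive features, or
#                 suggestive features present with no core features
def classify_dlb(n_core, n_supportive=0, n_suggestive=0):
    """Return 'probable DLB', 'possible DLB', or 'criteria not met'."""
    if n_core >= 2 or (n_core >= 1 and n_supportive >= 1):
        return "probable DLB"
    if (n_core == 1 and n_suggestive == 0) or (n_core == 0 and n_suggestive >= 1):
        return "possible DLB"
    return "criteria not met"
```

For example, fluctuation plus visual hallucinations (two core features) would meet "probable DLB," whereas fluctuation alone meets only "possible DLB."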
The pathologic hallmark of DLB is the presence of neuronal spherical intracytoplasmic inclusions of α-synuclein, termed Lewy bodies (Figure 63-1). The Lewy bodies seen in DLB are very similar in appearance to those that are seen in Parkinson disease; however, in DLB, the distribution extends
beyond the substantia nigra and locus coeruleus to involve the neocortex and limbic system. Cortical Lewy bodies lack the typical dense core with pale halo appearance that is seen in Parkinson disease. Other intraneuronal aggregates of α-synuclein (Lewy neurites), cortical senile plaques, and sparse tau pathology are all described in DLB. As described above, AD copathology is frequently encountered.
FIGURE 63-1. Lewy bodies (LB) seen with hematoxylin and eosin staining (left) and α- synuclein staining (right).
Currently, imaging studies do not add substantially to differentiating DLB from other neurodegenerative disorders and are only included as supportive features in diagnostic criteria. Structural magnetic resonance imaging (MRI) often identifies less atrophy of the medial temporal lobes in cohorts of DLB compared to AD and variably identifies greater atrophy in the basal ganglia structures and the dorsal midbrain. In group studies, single-photon emission
computed tomography (SPECT) with 99mTc-hexamethylpropyleneamine oxime (HMPAO) tends to show decreased regional cortical brain activity in
parietal-occipital regions in DLB compared to AD patients. There is relative sparing of posterior cingulate cortical perfusion compared to the occipital cortex in DLB, a highly specific finding called the cingulate island sign.
DLB patients exhibit decreased dopamine transport in the putamen and caudate by dopaminergic SPECT and decreased postganglionic sympathetic cardiac innervation by 123I-metaiodobenzylguanidine (MIBG) SPECT imaging. The modest performance characteristics of these tests limit their clinical utility.
The clinical course of DLB is often faster than that seen in AD and a drop of four to five points on MMSE per year is common (in contrast to three points per year, which is typical for AD). Patients with DLB should be tried on cholinesterase inhibitors. Often this results in responses that exceed those seen in AD, including frequent reduction or elimination of hallucinations.
Social stimulation and physical exercise to maximize balance and strength should be recommended. Symptomatic treatment with levodopa can be used, if indicated, for parkinsonian symptoms once fluctuation and visual hallucinations are stabilized with a cholinesterase inhibitor. Patients should be advised to avoid anticholinergic medications, including many common over-the-counter cold remedies. RBD, if severe, can be treated with melatonin or clonazepam. Atypical neuroleptic medications should be cautiously considered only if intolerable behavioral disturbances emerge.
Parkinson Disease Dementia
The temporal relationship between the onset of dementia and the development of parkinsonism is the primary clinical feature distinguishing DLB from PDD. In PDD, cognitive deterioration occurs in well-established Parkinson disease, typically years to decades after motor symptoms are identified. In contrast, research criteria for DLB require cognitive symptoms that predate parkinsonism by 12 months, although this “1-year rule” is often difficult to apply in the clinical setting. This temporal distinction is arbitrary, and many believe that the two disorders represent different points in the spectrum of the same disease, with abnormalities in α-synuclein accumulation underlying both disorders.
Approximately one-third of older Parkinson disease patients will develop sufficient cognitive symptoms during the course of their illness to impair function. Indeed, the diagnosis of Parkinson disease places a patient at high risk for mild cognitive impairment (MCI), which progresses to PDD in many patients. In pure PDD, the prominent findings include motor and psychomotor slowing, decreased response times, and alterations in concentration and attention. The recognition that medications used to treat Parkinson disease can affect cognition, and the understanding that, because of age, a substantial number of Parkinson patients will develop concurrent AD, mean that cognitive symptoms in this population can be multifactorial. Possible predictors for dementia in Parkinson disease include older age at onset of motor symptoms, bradykinesia, nontremor-prominent Parkinson disease, bilateral onset of motor signs, and declining response to levodopa. Depression and visual hallucinations may increase the risk as well. Patients with DLB typically exhibit a faster clinical decline than do patients with PDD.
Multiple System Atrophy
MSA is a sporadic disease marked by degeneration of multiple neurologic systems and a relentlessly progressive clinical course. Death typically occurs within 6 to 10 years, with an estimated 10-year survival of 40%. MSA is relatively infrequent; the incidence is estimated to be 0.6/100,000 person-years, or between 1.86 and 4.9/100,000 population. The incidence increases to 6/100,000 person-years among patients older than 50 years. The mean age of onset is 54 years. While currently speculated to arise from a combination of environmental and genetic predispositions, neither has been definitively established. Epidemiologic studies suggest a potential risk associated with exposure to pesticides. MSA is about three times more frequent in men than women, and limited epidemiologic data suggest that it may be less frequent among smokers.
The term MSA was first used in 1969 and encompasses diseases previously labeled as striatonigral degeneration (now termed MSA-P for parkinsonism), Shy-Drager syndrome, and olivopontocerebellar degeneration (now termed MSA-C for cerebellar). The MSA subtypes are indicative of the predominant neurologic component involved. Patients with prominent orthostatic features tend to have decreased survival compared to other subtypes. Several derivations of diagnostic consensus statements exist with mixed sensitivity.
The presenting symptoms of MSA are variable and include cerebellar findings, autonomic failure, pyramidal findings, and parkinsonism.
Parkinsonism (MSA-P) is prominent in most cases (80%) with cerebellar
symptoms (MSA-C) more prominent in about 20% of cases. The more common presenting parkinsonian features are akinesia, rigidity, and postural (rather than resting) tremor. Autonomic symptoms precede motor symptoms in most cases. Gait instability is common, but falls occur less frequently in early stages than in PSP. In early stages, MSA can mimic Parkinson disease, but additional distinguishing symptoms typically evolve within 5 years. Gait ataxia, limb kinetic ataxia, and dysarthria are the more frequent presenting symptoms in patients with cerebellar-dominant MSA. The dysarthria caused by cerebellar dysfunction has a characteristic quality of jerky, intermittently explosive, and slurred output, often associated with poor separation of syllables.
Autonomic symptoms include orthostasis, erectile dysfunction, constipation, and urinary incontinence. While orthostasis is common, syncope occurs infrequently. Diagnostic criteria for the orthostatic component of MSA require a 30 mm Hg drop in systolic blood pressure, although in practice a 20 mm Hg drop in systolic blood pressure or a 10 mm Hg drop in diastolic blood pressure is considered significant in the absence of an appropriate increase in heart rate. Occurrence of symptoms suggestive of MSA in an individual younger than 30 years or in an individual with a family history of similar disease should raise suspicion for alternative diagnoses.
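The blood pressure thresholds above amount to a simple comparison rule, sketched below. This follows only the thresholds as stated in this section (the formal MSA consensus criteria differ in detail), and the function name and labels are illustrative, not clinical standards.

```python
# Sketch of the orthostatic thresholds described in the text:
#   >= 30 mm Hg systolic drop -> meets the MSA diagnostic threshold
#   >= 20 mm Hg systolic or >= 10 mm Hg diastolic drop, without an
#   appropriate compensatory heart rate increase -> significant in practice
def orthostatic_drop(supine_sbp, standing_sbp, supine_dbp, standing_dbp,
                     hr_increase_adequate=False):
    """Classify an orthostatic blood pressure response (pressures in mm Hg)."""
    sbp_drop = supine_sbp - standing_sbp
    dbp_drop = supine_dbp - standing_dbp
    if sbp_drop >= 30:
        return "meets MSA diagnostic threshold"
    if (sbp_drop >= 20 or dbp_drop >= 10) and not hr_increase_adequate:
        return "significant orthostatic hypotension"
    return "not significant"
```

For example, a supine-to-standing fall from 150/90 to 115/85 (a 35 mm Hg systolic drop) meets the stated MSA threshold, while a fall from 140/85 to 118/80 without a compensatory heart rate rise is significant under the practical rule.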
Most MSA patients complain of sleep problems, commonly sleep fragmentation (53%), early waking (33%), and insomnia (20%). As with other synucleinopathies, RBD is common, occurring in up to 60% of MSA patients. Nocturnal stridor and obstructive sleep apnea are also frequent in MSA. Stridor is associated with decreased survival and risk of sudden death.
Historically, MSA was characterized as a primary movement disorder without significant cognitive impairment, and the current consensus criteria for MSA consider dementia a nonsupporting feature. There is emerging evidence of a higher prevalence of variable cognitive impairment, typically later in the clinical course. The pattern of cognitive impairment is most commonly frontal-executive dysfunction, likely secondary to deafferentation of frontostriatal neural pathways. Disruption of cerebellocortical circuitry in MSA-C can also lead to impaired attention, visuospatial function, and affect regulation.
Brain MRI changes occur in some patients but lack sensitivity and specificity for definitive diagnosis. These findings include hypointensity and atrophy of the putamen on T2-weighted images with a slit-like marginal
hypointensity just lateral to the putamen on axial images. A characteristic “hot cross bun” sign has been described in the pons and middle cerebellar peduncles, thought to be caused by degenerative changes in pontocerebellar fibers. This finding lacks sensitivity and is sometimes seen in Parkinson disease, limiting specificity. Atrophy of the brain stem, middle cerebellar peduncles, and cerebellum may also be seen. Volumetric analyses of the striatum and brain stem, where atrophy is very severe in MSA but not in simple Parkinson disease, demonstrate some promise in discriminating these two disorders. Electromyography (EMG) abnormalities at the anal sphincter can be seen in MSA, although the clinical utility for distinguishing MSA from Parkinson disease in early stages of disease has not been established.
The hallmark pathology of MSA is cell loss, gliosis, and glial inclusions in multiple neurologic systems including the spinal cord and cortex.
Prominent abnormalities are also seen in the basal ganglia, substantia nigra, and olivopontocerebellar pathways, as suggested by the terminology, with anatomy reflecting symptoms. On gross inspection, the putamen is shrunken and displays a green-gray discoloration that can appear cribriform when disease is severe. As with DLB and Parkinson disease, there is an accumulation of α-synuclein as half-moon–, oval-, or conical-shaped argyrophilic glial cytoplasmic inclusions.
Treatment is generally symptomatic with about one-third of patients responding to levodopa in the early stages of the disease. Patients should be encouraged to sleep in a lateral decubitus rather than supine position to minimize airway obstruction. Use of positive airway pressure devices should be considered. Invasive means of controlling stridor and apneas have included tracheotomy but should be approached cautiously within the full context of ethical and quality-of-life considerations. RBD may respond to melatonin or low-dose clonazepam at night. Labile blood pressures may develop and should raise concern when symptomatic. Avoiding alcohol, heavy meals, or straining at micturition and defecation is recommended.
Elastic stockings and elevating the head of the bed may provide some relief. Pharmacologic approaches designed to increase sympathetic tone (midodrine) or volume expansion (fludrocortisone) may be required. Patients with MSA-C tend to maintain function for a longer time than those with
MSA-P.
NEURODEGENERATIVE DISORDERS ASSOCIATED WITH TAU, TDP-43, OR FUS PATHOLOGY
Frontotemporal Lobar Degeneration Syndromes
FTLD is the term used to capture a neuropathologically linked group of non-AD dementing conditions associated with frontotemporal and basal ganglia pathology. FTD subsumes three clinical syndromes: bvFTD, semantic variant primary progressive aphasia (svPPA), and nonfluent variant primary progressive aphasia (nfvPPA). Recent evidence indicates that FTLD is closely related to several other neurodegenerative disorders: CBD, PSP, and motor neuron disease (Figure 63-2).
FIGURE 63-2. Frontotemporal lobar degeneration (FTLD) and potential associations to other neurodegenerative syndromes.
The core anatomic feature of FTLD is the focal, often asymmetric cortical degeneration of frontal and anterior temporal regions with general sparing of posterior cortical structures. The resultant brain atrophy can be severe, sometimes described as “knife-edge,” and the brain can weigh as little as 750 g at autopsy (Figure 63-3). Patients present with a primary behavioral or language deficit that typically corresponds to the region of greatest brain atrophy and dysfunction. Patients presenting with primary behavioral deficits are classified as suffering with the behavioral variant of FTD (bvFTD). bvFTD overlaps considerably with the two syndromes that
present with predominant language deficits: the semantic and nonfluent variants of primary progressive aphasia (svPPA and nfvPPA). In most centers bvFTD accounts for more than 50% of all cases with the others divided between svPPA and nfvPPA. The neuropathologic basis for nearly one-half of these diseases is the abnormal accumulation of tau protein with a similar percentage associated with abnormal aggregates of TDP-43. Less than 10% show aggregates of the fused in sarcoma (FUS) protein. Many FTD patients are misdiagnosed as having AD during life. In addition to changes in personality and behavior, features that should alert physicians to the possibility of FTD include early abnormalities in social conduct, loss of sympathy and empathy for others, repetitive motor behaviors, or hyperorality. When cognitive testing is completed, suspicion should be raised when abnormalities in executive functioning occur in the absence of prominent amnestic complaints or cognitive features localizing to posterior structures, such as problems with calculations and visuospatial tasks.
FIGURE 63-3. T2-weighted axial (right) and T1-weighted coronal (left) brain magnetic resonance imaging (MRI) in a 59-year-old woman with pathology-confirmed Pick disease.
Patients who present with isolated language symptoms that exist for at least 2 years in the initial stages of cognitive decline are termed to have primary progressive aphasia (PPA). As a group, patients with PPA have greater atrophy on the left than the right of the perisylvian region, the anterior
temporal lobes, and the basal ganglia. Inferior parietal lobule atrophy has also been described in PPA and is associated with predominantly word-finding deficits. Called logopenic aphasia by Gorno-Tempini and colleagues, this form of PPA is usually due to underlying AD pathology. As the syndrome progresses, patients presenting with nfvPPA go on to develop features of PSP or CBD or even amyotrophic lateral sclerosis (ALS). Thus, careful attention to the development of motor symptoms is necessary to help with the prediction of the underlying pathology. The emergence of artistic behavior has been described in some patients with primary language difficulty.
Efforts to distinguish the various PPA syndromes from each other may have clinical ramifications. Clinical, genetic, and imaging studies have indicated that the logopenic variant of PPA (lvPPA) is usually a language presentation of AD. As the term implies, these patients have decreased speech output.
They typically have slow speech with impaired syntactic comprehension and naming. lvPPA is associated with short-term phonologic memory deficits.
These patients exhibit profound echoic memory deficits manifested as forgetting portions of longer phrases during repetition tasks and being able to partially paraphrase but not exactly repeat such phrases. If confirmed by imaging or other biomarkers to be AD in etiology, these patients may be amenable to treatment with cholinesterase inhibitors or other emerging treatment strategies.
Nonfluent Variant Primary Progressive Aphasia
Patients with nfvPPA typically have apraxic, labored speech with errors in grammar, and difficulty with more complex syntax. Speech apraxia is the inability to produce speech caused by difficulty in programming the sensorimotor commands for the positioning and movement of muscles used to produce speech. This leads to difficulty with initiation of speech, sound substitutions, omissions, transpositions of syllables, and a slower rate of expressing sentences punctuated with inappropriate starts and stops. Patients may sound as if they are stuttering or having trouble enunciating, or otherwise mispronouncing words. The distortions can include sounds not present in their native language and the errors produced in apraxic speech are typically inconsistent. The problems are usually worse for multisyllabic words.
Speech apraxia is localized to the left opercular and anterior insular region.
Anomia is variably present, and phonemic paraphasias are frequently observed. Speech output is decreased, and words, often articles such as “the,” are dropped. In patients with nfvPPA, sentences tend to have more nouns than verbs. In contrast to patients with svPPA, these patients maintain single word comprehension and semantic knowledge. Thus, nfvPPA is not likely in a patient who has preserved articulation, preserved grammar, and deficits in semantic knowledge. Insight into the condition can be exquisitely preserved with nfvPPA, and patients often develop depression. Other behavioral problems are uncommon in early stages of disease.
MRI studies of nfvPPA have revealed atrophy that is asymmetrically left-dominant with preferential involvement of the inferior frontal lobe, the insular region, and the caudate. Hypometabolism of the left frontal region can be observed on fluorodeoxyglucose positron emission tomography (FDG-PET). Most patients show tau pathology at autopsy, with CBD being most common, followed by PSP and Pick changes. Approximately 20% show TDP pathology, usually TDP-43 type A.
Semantic Variant Primary Progressive Aphasia
In contrast to patients with nfvPPA, patients with svPPA have fluent speech that is grammatically accurate, but they exhibit the hallmark loss of semantic knowledge. Semantic memory is the encyclopedic knowledge of people, objects, facts, and words. Unlike episodic memories, individuals are not aware of where or when they learned these facts. Eventually individuals with svPPA lose all knowledge about the fact or word and exhibit features of a multimodality agnosia. Therefore, even when they are provided with the name of the object, the object is not recognized. Early in the disease, patients are often aware of word-finding difficulties and can also be aware of comprehension difficulties (eg, acknowledging that they don’t recognize a word they should know). Semantic paraphasias are frequent, with supraordinate substitutions of words and common use of nonspecific grouping words such as “stuff” and “things.” Repetition and prosody are preserved, as are syntax and verb recognition.
Patients with svPPA display surface dyslexia manifest by difficulty pronouncing irregularly spelled words, such as “gnat,” “heir,” or “pint,” when pronunciation does not follow standard phonologic rules. On neuropsychological testing, patients with svPPA have difficulty with category fluency and confrontational naming, particularly with low-frequency words. More recently, a right hemispheric variant of svPPA has been described.
These patients have difficulty naming and recognizing famous people and
develop a number of behavioral symptoms similar to bvFTD. In contrast to AD, once memory deficits emerge, many svPPA patients show greater deficits for remote than for recent memory.
Of all the PPA syndromes, svPPA has the greatest propensity to include behavioral problems, which are included as supportive evidence in diagnostic criteria. As with FTD, the behavioral issues seen with svPPA often involve hyperorality, disinhibition, and aberrant motor behavior.
Diagnostic criteria include behaviors such as loss of sympathy and empathy, narrowed preoccupations, and parsimony (excessive frugality, stinginess).
Rigidity in thinking can be striking. svPPA patients can develop compulsions and have been described to have visual hypervigilance, such as recognizing that a hair is subtly out of place on an examiner or quickly seeing a coin on the street. They may exhibit difficulty in the interpretation of emotions, particularly negative emotions such as sadness, anger, and fear. Emergence of behavioral symptoms correlates with duration of illness but can also be an early finding. The presence of early behavioral issues in a patient with a PPA syndrome should alert to the possibility of svPPA.
Anatomic studies may explain why svPPA patients often exhibit behavioral abnormalities. The disorder begins in the amygdala and anterior temporal lobes. When it starts on the left side, language deficits predominate, while right-sided presentations are characterized by loss of empathy for others or deficits in the recognition of familiar people. These patients sometimes meet research criteria for bvFTD while others are more typical of svPPA. While svPPA begins in the anterior left temporal lobe, it typically spreads anteriorly to involve the frontal regions. Eventually it spreads to the right medial frontal lobe, the right orbitofrontal lobes, and the right insular region, areas associated with behaviors such as disinhibition and apathy. The absence of behavioral problems in the logopenic variety of PPA and in nfvPPA is likely due to the general sparing of these same structures in those diseases.
Patients with svPPA often have TDP-43 type C–positive, tau-negative inclusions at autopsy. The etiology for svPPA and for TDP type C aggregates is unknown. Rarely are these cases familial. Recent work suggests that a significant subset of these patients have a history of autoimmunity and increased levels of tumor necrosis factor (TNF) in the serum.
Behavioral Variant Frontotemporal Dementia
bvFTD presents with the insidious onset of change in personality and inappropriate behaviors. The mean age of onset is in the mid-fifties. Most commonly, symptoms include disinhibition, poor impulse control, loss of sympathy or empathy for others, overeating, compulsive behaviors, and deficits in executive control or multitasking (Table 63-4). Patients can develop stereotyped behaviors, defined as repetitive, invariant behaviors that lack purpose. Examples include counting, pacing, organizing, or the repetitive use of catch phrases. Socially inappropriate activities can include shoplifting and other criminal behavior, public urination, offensive speech, and public masturbation. Perseveration is common. Cravings for sweets are often observed. When associated with the decreased sense of satiety that can occur, large, unhealthy weight gain results. Hyperorality and oral exploratory behavior, similar to human Klüver-Bucy syndrome, can occur.
TABLE 63-4 ■ SYMPTOMS IN BEHAVIORAL VARIANT OF FRONTOTEMPORAL DEMENTIA
Patients with bvFTD are often misdiagnosed as having psychiatric illness. In other instances, any patient with an atypical dementia syndrome is considered to have bvFTD. Delusions can occur, often with bizarre or grandiose overtones. Patients exhibit lack of empathy and can have a cold, blunted affect. When the anterior cingulate and medial frontal lobes are involved, apathy can be particularly prominent, and some degree of apathy, especially in later stages, is almost universal. Some patients undergo large changes in their beliefs and attitudes, including religious sentiments. In contrast to AD, depression is uncommon in bvFTD.
Neuropsychological testing demonstrates abnormalities in executive functioning, working memory, and social cognition with general sparing of visuospatial skills and verbal memory. Consequently, the MMSE score can be quite high even among patients with marked functional disability. BvFTD patients also display problems with set-shifting, concept formation, and abstract reasoning. They may demonstrate disinhibition, impulsivity, and
poor judgment during testing. These behaviors can falsely lower verbal memory scores. When closely scrutinized, poor scores on such tests are accompanied by frequent intrusions of novel words and endorsement of words that were not part of the original list learned (false positives on recognition testing). Although poor performance on executive function tasks is a feature of bvFTD, it may not be present in early cases and can also be a feature of AD and other dementias, even early in the course. Thus, executive function testing should not be considered mandatory for a diagnosis of bvFTD, and when present, poor performance should not be the main reason for making the diagnosis. Rather, it should only be used to support a diagnosis when other, behavioral features of bvFTD are present.
Behavioral symptoms are a common late finding in most dementia syndromes. Thus, the emergence of behavior and personality symptoms in a patient with well-established dementia should be looked upon with caution when considering a change in diagnosis to bvFTD. The importance of the first symptom (often the presenting symptom) in the evaluation of patients with neurodegenerative disorders cannot be overstated.
Patients with bvFTD will typically have profound, usually bilateral, frontal, anterior insular, and anterior temporal lobe atrophy. Patients with greater atrophy of the right frontal lobe than the left have more severe behavioral symptoms. Stages of atrophy have been described, with the earliest stage involving only mild atrophy of the orbital and superior medial frontal lobes and hippocampus. As the disease progresses, the anterior frontal and temporal cortices and basal ganglia are increasingly involved.
The severity of atrophy increases as disease advances. William Seeley has demonstrated that in many instances, the first neurons afflicted in bvFTD are von Economo neurons that sit in the anterior insular and cingulate cortex.
These neurons are a unique feature in the brains of humans, other primates, and a few other species of mammals. SPECT and FDG-PET techniques have been used to differentiate FTD from AD, with PET receiving Food and Drug Administration (FDA) approval for this indication. Both SPECT and FDG-PET demonstrate bilateral frontal hypometabolism/hypoperfusion in patients with FTD. Amyloid imaging is extremely valuable in separating AD from bvFTD, particularly in patients under the age of 70, in whom the presence of amyloid is not highly prevalent.
Pick Disease
Pick disease is a pathologic diagnosis and occurs in approximately 20% of clinical cases presenting with signs and symptoms of bvFTD. Less commonly it presents as nfvPPA. First described in 1892 by the German neurologist Arnold Pick, the pathologic hallmarks of Pick disease are argyrophilic cellular inclusions known as Pick bodies and swollen achromatic tau-positive neurons termed Pick cells. These are invariably associated with loss of large pyramidal neurons, resulting in a spongiform histologic appearance with selective atrophy of the frontal and anterior temporal lobes. Pick bodies are localized to the limbic cortex, paralimbic cortex, and predominantly the ventral aspect of the temporal lobe. The pre- and postcentral gyri are notably spared. The largest concentration of Pick bodies is found in the hippocampus and amygdala. Pick bodies are composed of randomly arranged tau filaments. In AD, tau pathology (neurofibrillary tangles) can spare the dentate gyrus; however, in Pick disease, this region is heavily involved.
Other FTLD Neuropathologies
FTLD is associated with tau pathology in about half of cases. The tau gene product has six isoforms, half of which result in three microtubule-binding repeats (3Rtau) and the other half result in four microtubule repeats (4Rtau). Pick disease is usually associated with 3Rtau. PSP and CBD, in contrast, are associated with 4Rtau. The clinical features and neuropathology of PSP and CBD are described later in this chapter.
Among FTLD cases that do not stain for tau protein, the histopathology will typically indicate a variable pattern of neuronal loss and gliosis with the presence of TAR DNA binding protein 43 (TDP-43) inclusions. TDP-43 neuropathology has been subdivided into four types, TDP-A to TDP-D, depending on the pattern of the TDP inclusions within the nucleus and cytoplasm. TDP type A is often associated with mutations in the progranulin gene, although sporadic cases have been seen. Type B is often associated with motor neuron disease, and the most common genetic mutation associated with familial FTD-ALS, C9orf72, often shows type B changes. TDP-C is the pattern associated with svPPA, while type D is seen with the rare FTD-causing mutation in valosin.
The Genetics of FTD
Our understanding of the genetics of FTLD is evolving. In 1998 it was discovered that a mutation in the microtubule-associated protein tau (MAPT)
gene was responsible for a familial form of FTD associated with Parkinson disease (FTDP-17). This mutation appears to be more common in Southern Europe and France than in Northern Europe, and a few mutations have been reported in China and Japan. The disease onset with mutations is variable, with several variants presenting in the third or fourth decade. Mean disease onset is approximately 52 years. The mechanism for disease pathogenesis with tau mutations appears to be variable. In some cases, a mutation in an intron adjacent to exon 10 leads to an excess of the 4R form of tau with abnormal aggregation of tau. In other mutations, abnormal microtubule binding or excessive formation of oligomers appears to be the mechanism for neurodegeneration.
A mutation in the progranulin (GRN) gene (only 1.7 Mb away from the tau gene) was identified in 2006. This mutation appears to account for up to 11% of sporadic and 25% of familial FTD cases. Disease onset tends to be later than with tau, and approximately 10% of mutation carriers remain asymptomatic after age 70. There are mutations that modify progranulin expression that may be partially responsible for this clinical variability. The onset of disease can range from 35 to 80 years. The phenotype is more variable than with MAPT mutations, and syndromes vary from bvFTD, nfvPPA, CBD, and Parkinson disease to AD. Asymmetry is common, sometimes with one hemisphere being massively atrophic and the other relatively normal. The pathophysiologic mechanisms associated with GRN mutations are also debated; loss of neuronal growth attributed to low levels of the progranulin protein, excessive inflammation associated with low progranulin levels, and a relative increase in the granulin proteins cleaved from progranulin are all possible. All these hypothesized mechanisms probably contribute to illness. Autoimmunity is seen in more than 10% of mutation carriers and can precede neurologic disease.
In 2011 a mutation in the C9orf72 gene was discovered as the major cause for familial forms of FTD, ALS, and FTD-ALS. A large expansion of a hexanucleotide repeat in the intron of C9 leads to overproduction of RNA and the production of dipeptides generated from the RNA in a non-ATG manner. The clinical onset is typically in the sixth decade but patients in the fourth and eighth decades have been described. The illness can begin as a psychiatric illness with highly variable symptoms ranging from borderline personality disorder, bipolar illness, depression, and conversion disorder to drug addiction. Similarly, whether a mutation carrier is diagnosed with
bvFTD or ALS is also variable, and in some instances, both syndromes emerge together. The mechanism for disease may reflect RNA-mediated neurodegeneration caused by large nuclear RNA aggregates that interfere with nuclear functions, the toxic effects of dipeptides, and possibly gene haploinsufficiency.
Other isolated cases of FTLD-related mutations have been described, including in the valosin-containing protein (VCP) and charged multivesicular body protein 2B (CHMP2B) genes. Patients with VCP mutations develop a rare disease with inclusion body myopathy, early Paget disease of bone, and FTD (IBMPFD). Additionally, predominantly ALS but sometimes bvFTD phenotypes can occur with mutations in the TDP-43 and FUS proteins. The list of mutations continues to grow, but thus far most of the more recently discovered mutations account for a small proportion of mutation-related FTLD.
Treatment of FTLD
Treatment approaches for FTD and related disorders are generally aimed at symptom management, as there are not yet any effective disease-modifying therapies. Selective serotonin reuptake inhibitor (SSRI) medications may be beneficial for behavioral symptoms, including carbohydrate craving and compulsions. If delusions are problematic, atypical neuroleptics can be considered. There is no theoretical basis for the use of cholinesterase inhibitors, which can aggravate agitation. Attention to caregiver issues is important as the stressors often differ from those seen in typical AD and can be particularly burdensome. The different social and behavioral issues faced by FTLD caregivers compared with AD caregivers, together with the younger age of both patients and caregivers in FTLD, can lead caregivers to feel isolated in support groups focused on typical AD.
Disease-modifying approaches are being investigated. Tau-lowering strategies are emerging for tau forms of FTD including tau mutations, Pick disease, PSP, and CBD. Monoclonal antibodies that target tau appear to show efficacy in animal models and have potential to diminish the brain’s tau burden. Several other approaches are being investigated that could be effective for treatment of FTLD associated with TDP-43 inclusions, including sporadic and mutation-related disease.
AMYOTROPHIC LATERAL SCLEROSIS (LOU GEHRIG DISEASE)
ALS was first identified in 1869 by the French neurologist Jean-Martin Charcot, who described a progressive neurodegenerative disease with mixed upper and lower motor neuron signs. The disease’s name is derived from myelin pallor identified in the lateral aspects of the spinal cord, representing axonal degeneration of upper motor neurons as they descend to the limbs. In the United States, the disease is best known as Lou Gehrig disease, named after the famous baseball player who died of the disease in 1941. It occurs with an incidence of approximately 1 to 2 per 100,000 persons each year with a nearly 2:1 male-to-female predominance. About 5000 people develop ALS annually. As with FTLD, peak incidence occurs in midlife; however, the disease can be seen at advanced ages.
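The quoted figures are internally consistent: applying an incidence of 1 to 2 per 100,000 per year to a population of roughly 330 million (an assumed round figure for the United States, used here only for illustration) brackets the stated estimate of about 5000 new cases annually.

```python
# Illustrative arithmetic only: annual case estimate from an incidence rate.
population = 330_000_000  # assumed round US population figure

low = 1 * population // 100_000   # lower-bound incidence of 1 per 100,000
high = 2 * population // 100_000  # upper-bound incidence of 2 per 100,000

print(f"{low} to {high} new cases per year")  # 3300 to 6600 new cases per year
```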
Patients with ALS typically present with complaints of weakness in one or more limbs, resulting in unexplained tripping or dropping of items.
Clumsy fine finger movements lead to difficulty with tasks such as buttoning clothes or writing. Patients often complain of cramping. Bulbar symptoms such as speech slurring, difficulties with swallowing, and hoarseness occur in many patients. Bulbar symptoms are presenting symptoms in about 25% of cases and more commonly occur in older patients. Patients may complain of difficulty chewing or swallowing. Up to 45% of patients will develop pseudobulbar affect, characterized by episodes of uncontrolled laughter or crying, often in inappropriate settings. Many patients have symptoms of a slowly progressive behavioral syndrome including apathy, lack of insight, and loss of empathy.
The neurologic examination identifies both upper and lower motor neuron abnormalities. Muscle atrophy is seen, usually in the hands and often noted at the thenar or hypothenar eminence. Fasciculations and weakness are noted as lower motor neuron findings. Upper motor neuron findings include spasticity, hyperreflexia, and abnormal plantar responses. Much of the diagnostic work-up is aimed at ruling out alternative etiologies, and EMG is diagnostic for ALS.
The clinical course is relentlessly progressive with only a 50% survival at 3 years, with aspiration or respiratory failure the most common cause of death. Mild muscle weakness progresses to inability to walk, difficulty with speaking, and dysphagia. The FDA has approved riluzole for treatment of
ALS, which extends survival and delays the need for ventilation support. Treatment is otherwise directed at symptom control and quality of life. This is best achieved with a multidisciplinary approach, including ancillary services such as physical therapy, respiratory therapy, and social work.
The neuropathology of ALS includes neuronal cytoplasmic inclusions of a ubiquitinated protein, most commonly seen in lower motor neurons. These inclusions were shown to contain the 43-kDa TAR DNA-binding protein (TDP-43), the same protein that has been identified in FTLD-ALS or pure FTLD. This finding is consistent with previously recognized associations between ALS and FTD. The highly penetrant SOD1 mutation was previously thought to be the main genetic cause of familial ALS prior to the discovery of the C9orf72 mutation. About 15% of patients with FTLD develop ALS. Similarly, about 50% of ALS patients develop cognitive and behavioral symptoms of FTLD, and a smaller percentage of these develop dementia.
CORTICOBASAL DEGENERATION
CBD is a tauopathy that was initially described in patients with dementia, apraxia, cortical sensory deficits, and asymmetric parkinsonism with a rigid akinetic arm. It is now recognized that the manifestations of CBD range from bvFTD, nfvPPA, executive and motor deficits to the asymmetric parkinsonian syndrome originally described. Often a bvFTD or nfvPPA syndrome is present for many years before the onset of motor symptoms. The mean age of disease onset for CBD is in the mid-sixties. A few series suggest that women may be more commonly affected than men. CBD is generally sporadic in occurrence, although familial cases have been described in association with both tau and progranulin mutations. Many cases are not correctly diagnosed during life.
When parkinsonian features emerge, they are often, but not always, asymmetric. Severe upper limb dystonia can result in internal contracture of the limb with the fingers clutching the thumb with flexion at the wrist. An alien limb phenomenon sometimes occurs with the limb not only levitating, but often hooking onto clothing or grabbing other body parts and behaving as if it were no longer under the control of the patient. On occasion, the alien limb will interfere with actions of the other hand. Simple levitation of the lower extremity is described as well but is less specific to CBD. Focal reflex myoclonus that is typically present first in the fingers, then in the hand, can be elicited with distal percussion.
Concurrent cortical sensory loss is seen without deficits to peripheral sensory modalities. This is identified by testing for agraphesthesia, agnosia, or problems with two-point discrimination. Bilateral limb apraxia is common, and apraxia of opening or closing the eyes can occur. Close evaluation of eye movements will often reveal saccadic latency with normal velocity, often in the horizontal plane. Supranuclear vertical gaze palsy can occur.
CBD shares many neuropathologic abnormalities with FTD and PSP, suggesting the possibility that they represent a spectrum of similar diseases linked by underlying tau pathology. Ballooned neurons throughout the neocortex associated with neuronal loss and astrocytic tau-staining plaques are the diagnostic histologic features of CBD. Unlike Pick disease, the distribution of ballooned neurons is extensive and involves both the primary sensory and motor regions. Pick bodies are absent. Commonly, the superior frontal lobes and the parietal lobes are most heavily affected and are reflected in patterns of neuropsychological testing. Secondary degeneration of the corticospinal tracts occurs. Grossly asymmetric atrophy of the parasagittal superior frontal gyrus and superior parietal lobule is common, with relative sparing of the temporal and occipital regions.
The treatment of CBD is symptomatic as disease-altering therapies are not known. Levodopa is beneficial in the minority of patients. Muscle relaxants and physical therapy with range-of-motion exercises can be useful, with occasional consideration of botulinum toxin therapy for dystonic limbs. Clonazepam may be instituted for myoclonus. There is no theoretical basis for the use of cholinesterase inhibitors in this disease.
PROGRESSIVE SUPRANUCLEAR PALSY
PSP, also referred to as Steele-Richardson-Olszewski syndrome, is a progressive neurodegenerative disease with prominent extrapyramidal motor findings and supranuclear ocular abnormalities. Patients with PSP present to both movement disorder clinics for motor abnormalities and behavioral clinics for cognitive or psychiatric complaints. The frequency of PSP in the general population is around 1 in 100,000, increasing to 7 in 100,000 among people older than 55 years. Diagnosis typically occurs in the sixth to seventh decade of life. There are only a few published reports of familial cases, suggesting that an autosomal-dominant mutation plays only a small role in the disease.
The initial descriptions of PSP emphasized abnormalities in movement with nearly all patients exhibiting early gait abnormalities, and 60% presenting with falls as the first manifestation of disease. Patients often pivot with turns and tend to fall backward. Increased tone (axial more than appendicular) and dysphagia (affecting 46% in the first 5 years) are other common motor findings while bradykinesia is seen in only about a quarter of autopsy-proven cases. Spastic dysarthria results in slurred speech.
Ultimately patients become mute.
Eye movement abnormalities are the hallmark feature of the disease, with vertical supranuclear palsy a critical feature for diagnosis. While vertical gaze palsy can be upward or downward, downward gaze palsy has greater specificity for PSP because mild vertical gaze limitation is normal with aging. The oculocephalic reflex for vertical movement is preserved early in disease despite the vertical gaze palsy. Square wave jerks and both latency and hypometria of eye movements are often seen, typically greater with command (saccadic) movements than with pursuit. A decreased blink rate and furrowed brow can be noticed when interviewing the patient.
Cognitive or psychiatric features are often present, even in the early stages of illness. The pattern of cognitive dysfunction is characterized by abnormalities localizing to the frontal and temporal lobes and subcortical structures. Thus, slowing of cognitive performance and below expectation performance in verbal fluency and executive functioning tasks are often documented, typically with retained verbal and visual memory performance. Behavioral symptoms of apathy, compulsions, perseveration, and utilization behavior are common. This pattern of neuropsychological and behavioral findings is similar to what is seen in FTD, consistent with the overlapping pathology of the two entities. Of note and differing from FTD, insomnia, depression and anxiety are often seen in PSP.
Treatment of PSP is focused on control of symptoms. Occupational therapy to address speech and visual limitations can be successful.
Prevention of falls is critical but often challenging as impulsivity and lack of insight can limit the effectiveness of therapeutic interventions. Levodopa can be helpful in some patients with PSP, but eventually loses its efficacy.
Survival from the time of diagnosis is between 6 and 10 years.
SPINOCEREBELLAR ATAXIA SYNDROMES
Many of the SCA genetic mutations identified to date involve expansion of repeated trinucleotides and are inherited in an autosomal-dominant manner; however, penetrance can vary greatly. The length of the polyglutamine repeat appears to be a major determinant of the age of disease onset, in an inverse manner. Genetic analyses are estimated to identify only 40% to 60% of familial and less than 25% of sporadic cases. The likelihood that a gene mutation will be identified decreases with older age (> 40 years). The prevalence of SCAs is between 1 and 4 per 100,000, with regional variation caused by founder effects: SCA 2 (Cuba), SCA 3 (Azores), and SCA 10 (Mexico). These disorders most commonly present in the third decade of life, with some presenting in youth; however, the age of onset extends into older age for many SCAs.
Phenotypic overlap is common in SCA syndromes, but progressive cerebellar disability, often presenting early in disease, is a shared feature. SCA syndromes often affect noncerebellar portions of the nervous system, particularly as the disease progresses, with neuropathology identified in the brain stem, basal ganglia, and cortex. Associated clinical features include oculomotor abnormalities (SCA 1, 2, 3), retinopathy (SCA 7), seizures (SCA 10, 17), peripheral neuropathy (SCA 1, 2, 3, 4, 8, 18, 25), and cognitive and behavioral deterioration (SCA 17, dentatorubral pallidoluysian atrophy).
Other neurodegenerative disorders should be considered in patients with progressive cerebellar dysfunction, including mitochondrial diseases, Huntington disease, leukodystrophies, Friedreich ataxia, MSA, prion disease, and the premutation associated with fragile X syndrome. Paraneoplastic processes should be considered in patients presenting with subacute cerebellar dysfunction. Treatment for SCA is generally aimed at symptom management, physical and occupational therapy, and genetic counseling.
SUMMARY OF NEURODEGENERATIVE DISORDERS
Clinical diagnosis of neurodegenerative disorders can be difficult and requires a careful, comprehensive evaluation to arrive at the correct diagnosis. Table 63-5 summarizes the distinctive features of common non-Alzheimer diseases that cause dementia.
TABLE 63-5 ■ NEURODEGENERATIVE DISORDERS
EPILEPSY
Epilepsy in older adults can be caused by a variety of diseases, including ischemic and hemorrhagic strokes, mass lesions, infections, inflammatory etiologies, and neurodegenerative diseases. Cognitive comorbidities are frequently seen in people with epilepsy. Cognitive dysfunction in this setting may reflect a complicated interplay among recurrent aberrant network activity during seizures, the underlying degeneration or lesions, and antiepileptic medications themselves.
Acute and remote strokes are common causes of seizures in older adults. When a new seizure develops in the setting of a stroke, the cause is generally hemorrhagic, whereas both hemorrhagic and ischemic strokes can cause chronic seizures. A common cause of hemorrhagic stroke is hypertension;
however, this generally causes hemorrhages in deep brain structures. Thus hypertension is not the most common cause of strokes associated with seizures. Cortical hemorrhages commonly occur in cerebral amyloid angiopathy (CAA). CAA is characterized by amyloid deposition in the small vessels of both the brain and leptomeninges. This is the same abnormal protein that accumulates in AD, with or without deposition of amyloid in the brain parenchyma. CAA predisposes to small microbleeds and larger intracerebral and subarachnoid hemorrhages.
Brain tumors are another common cause of epilepsy in older adults.
Although primary brain tumors can cause seizures, metastatic lesions are the most common type of intracranial tumors that present with epilepsy in older persons. The tumors that most often metastasize to the brain originate from lung, breast, kidney, colon, rectum, and skin (melanoma).
Encephalitis due to underlying autoimmune or paraneoplastic diseases commonly presents with subacute cognitive or behavioral changes and seizures, and requires direct testing for antibodies for diagnosis. In addition to antiepileptic drugs, treatment of encephalitis requires immunosuppressive drugs, as well as identification and treatment of underlying malignancy. A wide variety of infections in older adults can present with seizures.
Given the higher prevalence of valvular heart disease and cardiac surgery in this population, endocarditis can produce septic emboli or even CNS abscesses that present with epilepsy.
Neurodegenerative disorders are another setting in which epilepsy can be a symptom. The rate of epilepsy in AD ranges from 10% to 22% across studies, and the prevalence of epilepsy in this disease is higher than that in other diseases causing dementia. The most common seizure type is complex partial seizures, often presenting with features typical of medial temporal lobe seizures, including auras such as déjà vu and olfactory hallucinations, and speech arrest in an unresponsive, awake state. Secondary generalization to tonic-clonic seizures also occurs commonly. Although these events were classically considered a feature of late-stage AD, recent reports suggest they can develop when cognitive symptoms are mild or even in the asymptomatic stages of the disease. There are several proposed mechanisms for epileptogenesis in AD, including excessive presynaptic glutamate release as well as impaired GABAergic interneuron activity, especially in the dentate gyrus. Comorbid epilepsy and AD is associated with earlier onset of
dementia and faster rate of decline; however, it is unclear if treatment of seizures can favorably alter the rate of disease progression.
A major factor in choosing epilepsy treatment in older adults is the potential for side effects, including cognitive dysfunction, reduction in bone mineral density, and dizziness that may increase the risk of falls. Choice of treatment should also include consideration of medical comorbidities, especially impaired hepatic or renal function and cardiac conduction abnormalities. These impairments, together with other aging-associated changes in organ physiology, such as altered body composition and volume of distribution and differential liver or kidney function, may affect serum drug levels. In addition, polypharmacy and drug–drug interactions are a common concern in the older population. Given these concerns, ideal drugs should have minimal side effects and reduced chances for drug–drug interactions. Suitable agents for treatment of epilepsy in the older population include but are not limited to lamotrigine, levetiracetam, lacosamide, and oxcarbazepine.
ACUTE TRAUMATIC BRAIN INJURY
Traumatic brain injury (TBI) remains a common cause of neurologic dysfunction at all ages, including the older population. The impact of injury on neurological function depends on several factors, including the severity of injury, frequency of recurrence, and whether the injury is “penetrating” versus “nonpenetrating.” There is converging evidence that repeated head blows that may cause only mild or no acute symptoms appear to increase risk of dementia. This entity, called chronic traumatic encephalopathy, is discussed in Chapter 64 in this textbook.
Penetrating traumatic brain injuries, also termed “open head injuries,” are associated with the highest morbidity and mortality. A penetrating TBI is characterized by disruption of the dura mater, the outermost and thickest meningeal layer covering and protecting the brain. The typical mechanism of injury is a high-velocity object such as a bullet or other fragment, or a skull fracture with cavitation of bone into the cranial cavity. Because of these mechanisms, penetrating injuries are more common in younger people. Penetrating head injuries typically require intensive care, especially for monitoring of intracranial pressure and infection. Importantly, the risk of developing post-traumatic epilepsy is greater than 50% after a penetrating head injury.
Nonpenetrating (or closed) head injury is more typically seen in older people. Falls are the most common cause, with progressive visual and gait dysfunction, deconditioning, and use of sedating medications as important risk factors. Severity of head injury is defined by the Glasgow Coma Scale (GCS), with 13 to 15 being mild injury, 9 to 12 moderate injury, and < 9 defined as severe TBI. Subdural hematoma is a common complication of falls in older adults, and may be an explanation for subacute cognitive changes even after incidents that may have had minimal impact on the head. Notably, both head injury and subdural hematoma are risk factors for further cognitive decline and epilepsy.
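The GCS severity bands described above amount to a simple classification rule. A minimal illustrative sketch (not clinical software; the cutoffs are taken directly from the text) might look like:

```python
def classify_tbi_severity(gcs: int) -> str:
    """Classify TBI severity from a Glasgow Coma Scale (GCS) score.

    Bands as stated in the text: 13-15 mild, 9-12 moderate, <9 severe.
    Valid GCS scores range from 3 (deep coma) to 15 (fully alert).
    """
    if not 3 <= gcs <= 15:
        raise ValueError("GCS score must be between 3 and 15")
    if gcs >= 13:
        return "mild"
    if gcs >= 9:
        return "moderate"
    return "severe"
```

For example, `classify_tbi_severity(14)` returns `"mild"`, consistent with the definition of mild TBI used in the following paragraph.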
Mild TBI, synonymous with “concussion” and defined as head injury with less than 30 minutes of loss of consciousness, less than 24 hours of posttraumatic amnesia, and an initial GCS of 13 to 15, is frequently encountered in the older population following falls. Even after recovery from the acute trauma, people with mild TBI often report a constellation of symptoms termed “postconcussive syndrome,” characterized by ongoing headaches, dizziness (especially vertigo), insomnia, or other sleep disturbances. There are also behavioral symptoms such as irritability, anxiety, and depression, and even subtle personality changes. Additionally, impaired memory and cognition, fatigue, and decreased cognitive stamina can occur. The prevalence of postconcussive syndrome following brain injury ranges from 30% to 80%; the severity of injury does not directly correlate with the development of postconcussive syndrome. Treatment is supportive and the symptoms generally resolve over time (sometimes many months); however, they may persist indefinitely.
CONCLUDING REMARKS ON NON-ALZHEIMER DISEASE NEURODEGENERATIVE DISORDERS
Scientific advances over the past decade have remarkably enhanced our understanding of non-AD neurodegenerative disorders. Yet, large knowledge gaps remain. In many cases, neuropathology can now be identified, facilitating the process of linking disease syndromes to pathologic substrate and supporting a deeper understanding of disease pathobiology. In such cases, clinical researchers can begin to accurately describe the clinical presentation of pathology-proven cases to provide clinicians with clues to neuropathology. When combined with emerging noninvasive diagnostic tools,
this knowledge can advance the ultimate goal of pathology-driven treatment approaches, and one day a cure.
FURTHER READING
Aarsland D, Creese B, Politis M, et al. Cognitive decline in Parkinson disease. Nat Rev Neurol. 2017;13(4):217–231.
Fabbrini G, Fabbrini A, Suppa A. Progressive supranuclear palsy, multiple system atrophy and corticobasal degeneration. In: Reus VI, Lindqvist D, eds. Handbook of Clinical Neurology (3rd series, Vol. 165). Elsevier; 2019:155–177.
Lee SE, Khazenzon AM, Trujillo AJ, et al. Altered network connectivity in frontotemporal dementia with C9orf72 hexanucleotide repeat expansion. Brain. 2014;137(pt 11): 3047–3060.
Manto MU. The wide spectrum of spinocerebellar ataxias (SCAs). Cerebellum. 2005;4(1):2–6.
McKeith IG, Boeve BF, Dickson DW, et al. Diagnosis and management of dementia with Lewy bodies. Neurology. 2017;89:88–100.
Miller B, Guerra JL. Frontotemporal dementia. In: Reus VI, Lindqvist D, eds. Handbook of Clinical Neurology (3rd series, Vol. 165). Elsevier; 2019:33–35.
Rascovsky K, Hodges JR, Knopman D, et al. Sensitivity of revised diagnostic criteria for the behavioural variant of frontotemporal dementia. Brain. 2011;134:2456–2477.
van Es MA, Hardiman O, Chio A, et al. Amyotrophic lateral sclerosis. Lancet. 2017;390:2084–2098.
Chapter 64
Traumatic Brain Injury and Chronic Traumatic Encephalopathy
Ann C. McKee, Daniel Kirsch
TRAUMATIC BRAIN INJURY AND DEMENTIA
For decades, traumatic brain injury (TBI) has been considered a risk factor for Alzheimer disease (AD), and recent large cohort studies indicate that TBIs of all severities (mild, moderate, severe) are a risk factor for dementia; however, the neuropathology underlying this risk is largely unknown. One of the difficulties in understanding the neuropathological consequences of TBI is that TBI is a heterogeneous condition. The grade of severity (mild, moderate, severe) is based on the Glasgow Coma Scale, duration of loss of consciousness (LOC), and development of posttraumatic amnesia; TBIs further vary in frequency (single, multiple), type (focal, diffuse), nature (penetrating, blunt impact, blast), and the presence or absence of skull fracture, hemorrhage, contusion, infarction, and/or other secondary processes. In a pooled analysis of 7130 participants from the Religious Orders Study, Memory and Aging Project (ROSMAP) and Adult Changes in Thought (ACT) cohorts, TBI with LOC was not associated with a clinical diagnosis of AD dementia or with AD neuropathological changes at autopsy. These findings were confirmed and extended by a recent study of more than 4000 autopsy participants from the National Alzheimer’s Coordinating Center (NACC) that also found no association between TBI and AD neuropathological change or Alzheimer disease–related dementias (ADRDs) using Consortium to Establish a Registry for Alzheimer’s Disease (CERAD) scores for neuritic plaques and Braak stage for neurofibrillary tangles (NFTs). There was also no association with cortical Lewy bodies (LBs),
hippocampal sclerosis, infarcts, microinfarcts, or amyloid angiopathy. In contrast, exposure to repetitive mild TBI or repetitive head impacts (RHI), such as concussions and subconcussive impacts from contact sport participation, military service, or physical abuse, is associated with chronic traumatic encephalopathy (CTE), a distinctive hyperphosphorylated tau protein (p-tau)–based neurodegeneration. In CTE, the pathological diagnosis is based on its pathognomonic lesion, consisting of a perivascular accumulation of p-tau in neurons and neurites in an irregular pattern at the depths of the cortical sulci (Figure 64-1). A supportive, yet nondiagnostic, feature of CTE is the preferential distribution of NFTs in the superficial regions of the cerebral cortex, quite unlike the laminar distribution of NFTs in cortical layers 3 and 5 found in AD. Because the criteria for the diagnosis of CTE were defined only recently and require the use of p-tau immunohistochemistry (IHC) beyond the standard silver staining recommended by CERAD, the prevalence of CTE in neurodegenerative disease brain banks is currently unknown. The few studies that have reexamined brain bank cohorts for CTE using the newly defined National Institute of Neurological Disorders and Stroke (NINDS) criteria and p-tau IHC found CTE in 1% to 30% of cases, and some cases previously considered to be only AD have been rediagnosed as comorbid AD and CTE.
FIGURE 64-1. Histologic findings in stage II chronic traumatic encephalopathy (CTE). A. Whole mount coronal sections show multiple foci of p-tau pathology primarily located at the depths of the cortical sulci of the frontal and temporal lobes (free floating 50 μ sections, AT8 (p-tau) immunostain). B–H. Neuronal p-tau pathology consists of neurofibrillary tangles and dotlike and threadlike dystrophic neurites and is characteristically found around arterioles (B–F, free floating 50 μ sections, AT8 (p-tau) immunostain; G, H, 10 μ paraffin-embedded sections, AT8 (p-tau) immunostain). I. Subpial astrocytic tangles (TSAs), which are nondiagnostic but supportive, can be found at the cortical depths (free floating 50 μ sections, AT8 (p-tau) immunostain). Other pathologies include pretangles (J), dystrophic neurites in the white matter (K), and occasional p-tau immunopositive astrocytes (L) (free floating 50 μ sections, AT8 (p-tau) immunostain). There may be marked astrocytosis of the white matter (M, N) (free floating 50 μ sections, glial fibrillary acidic protein immunostain). Hemosiderin-laden macrophages (O) (10 μ paraffin section, Luxol fast blue hematoxylin and eosin stain) and multiple perivascular foci of reactive microglia (P) are found around small vessels in the cerebral white matter (free floating 50 μ sections, LN3 immunostain).
Learning Objectives
Understand the epidemiology, pathophysiology, clinical manifestations, and adverse outcomes of traumatic brain injury (TBI) in older adults.
Acquire cutting-edge knowledge about chronic traumatic encephalopathy (CTE) and the risk of developing dementia following repeated traumatic insults to the brain.
Learn about the neuropathology, common clinical symptoms, diagnostic criteria, and progression of CTE over time.
Recognize the various strategies and treatments used to address cognitive deficits associated with TBI and CTE.
Key Clinical Points
Chronic traumatic encephalopathy (CTE) is diagnosed neuropathologically and is a primary tauopathy with a distinct well-defined pattern of tau pathology.
The clinical presentations of CTE are nonspecific and can be grouped into behavioral, cognitive, dementia, and motor. Currently, there are no neuroimaging, laboratory, or cognitive tests that can definitively diagnose the disease during life.
The classic neuropathology of CTE includes a neurofibrillary tangle with “dot-like” neurites around a small arteriole in the brain.
In CTE, pathology becomes progressively worse over time with worsening of clinical presentations that are generally nonspecific.
CTE is associated with exposure to repetitive head impacts and pathology is increased with duration of exposure.
Over time, the CTE pathology progresses and involves other parts of the brain, including the hippocampus, entorhinal cortex, amygdala, brainstem, and cerebellum.
There have been remarkably few detailed case studies of the neuropathology of remote moderate–severe TBI, although there are many speculative reviews. Several small neuropathologically focused studies support a relationship between moderate–severe TBI and atypical AD
neuropathological changes characterized by an unconventional distribution of p-tau and amyloid-beta (Aβ) pathology. Johnson et al. examined the brains of 39 individuals diagnosed with a single moderate–severe TBI after 1 to 47 years’ survival and found that Aβ plaques were greater in density in the TBI group compared to controls. In addition, in subjects 60 years or younger at the time of death, NFTs were more frequent in the TBI group compared to age-matched controls and were distributed more commonly in the superficial layers of the cortex with clustering of NFTs among the depths of the sulci, an NFT distribution unlike AD and suggestive of CTE. Similarly, in a study of chronic severe TBI survivors without dementia, Scott et al. found increased Aβ accumulation by 11C-Pittsburgh compound positron emission tomography (PET) imaging in nine subjects after severe TBI compared to age-matched controls and in a pattern distinct from the Aβ accumulation seen in AD patients. Clinical, neuroimaging, and neuropathological characteristics of two subjects who developed early-onset dementia after sustaining a single moderate–severe TBI many years earlier have been described. One subject also had a history of RHI from military combat. Our study demonstrated the diversity and complexity of the neuropathology after remote TBI. In both cases, the pathologic findings included severe cerebral atrophy (brain weight
< 930 g), white matter degeneration (which was particularly severe in the posterior corpus callosum), atypical AD, atypical CTE (with an unusual distribution of NFTs in the superficial layers of the cortex, but without the diagnostic pathognomonic perivascular lesion of CTE), widespread diffuse and sparse neuritic Aβ plaques, cerebral amyloid angiopathy (CAA), neuronal loss, and astrocytosis. Unusual α-synuclein and TAR DNA-binding protein 43 (TDP-43) proteinopathies were also found in one case. The size and distribution of the LBs and TDP-43 pathology were not characteristic of typical Lewy body disease (LBD) or frontotemporal lobar degeneration (FTLD). In a third recently reported case of a long-term survivor of moderate–severe TBI, there were widespread NFTs, α-synuclein positive LBs, diffuse Aβ plaques, and CAA. Other pathological changes reported in long-term survivors of TBI are axonal loss and disruption, reduced cortical thickness, blood-brain barrier dysintegrity, white matter degeneration, and chronic inflammation.
Axonal injury is one of the most common neuropathologies after TBI across all severities of injury. Hay et al. reported that 47% of individuals who experienced a single moderate–severe TBI and survived a
year or more after injury had evidence of chronic blood-brain barrier disruption with multifocal abnormal fibrinogen and immunoglobulin G immunostaining in the cortex compared to limited localized immunostaining in controls. In this same group of long-term TBI survivors, Johnson et al. reported increased persistent neuroinflammation with significantly increased microglial density and reactive morphology, and reduced corpus callosum thickness compared to controls. Findings in animal models of TBIs have also demonstrated axonal loss, myelin loss, white matter degeneration, astrocytosis, neuronal loss, and chronic inflammation.
These studies suggest that TBI is a heterogeneous entity that is influenced by the biomechanics of the acute traumatic event, secondary consequences of the acute injury, and chronic, poorly understood pathophysiological processes including deposition of multiple neurodegenerative disease proteins (p-tau, Aβ, TDP-43, α-synuclein), cerebral atrophy, white matter degeneration, axonal loss, blood-brain barrier disruption, and persistent neuroinflammation. Other factors that may exert substantial influence over the long-term outcome of TBI include the frequency and severity of the TBI (mild, moderate, or severe) as well as the underlying characteristics of the individual who experienced the TBI, including genetic factors, age, gender, cardiovascular health, and cognitive reserve, among others. Clearly, there is a fundamental need for rigorous clinicopathological case evaluations to determine the full spectrum of clinical features and neuropathological alterations that occur after remote TBI, much needed data that will advance the field and highlight pathways for successful intervention and treatment.
TRAUMATIC BRAIN INJURY AND PARKINSON DISEASE
Multiple studies have implicated any lifetime history of TBI as a risk factor for Parkinson disease (PD). Whether TBI sustained in older adulthood increases short-term risk of PD can be obscured by recall bias or reverse causation. However, Gardner et al. found that among middle-aged and older patients, there is a 44% increased risk of being diagnosed with PD in those who experienced a TBI compared to those with nonbrain-related traumatic injury. Furthermore, the risk was significantly higher with more severe or more frequent TBI, providing additional support for a causal association.
The Crane et al. study of ROSMAP and ACT participants also showed that
TBI with LOC < 1 hour predicted increased risk for cortical LBs, and TBI with LOC > 1 hour predicted increased risk for cerebral microinfarcts. In addition, in the ACT cohort, TBI with LOC > 1 hour was associated with clinical diagnosis of PD. Contact sport athletes are also at increased risk of developing PD and parkinsonism. Adams et al. recently showed that the number of years an individual was exposed to RHI through contact sports was associated with the development of neocortical LBD, and LBD, in turn, was associated with parkinsonism and dementia. Furthermore, in the Framingham Heart Study (FHS) community cohort, years of contact sports play were associated with neocortical LBD (OR = 1.30 per year, p = 0.012), and in a pooled analysis, a threshold of more than 8 years of play best predicted neocortical LBD (ROC analysis, OR = 6.24, 95% CI = 1.5–25, p = 0.011), adjusting for age, sex, and APOE ε4 allele status.
CHRONIC TRAUMATIC ENCEPHALOPATHY
CTE is a neurodegenerative tauopathy associated with repetitive mild head trauma, including concussion and asymptomatic subconcussive impacts. CTE has been identified in American football, ice hockey, soccer, baseball, and rugby players, professional wrestlers, a bull rider, military veterans exposed to blast, and victims of assault and domestic violence. In 2009, using 50 μm whole mount landscape slides and p-tau IHC, the distinctive regional pathology of CTE was described in two former boxers and one former National Football League (NFL) player. The findings were compared to the 48 cases of neuropathologically confirmed CTE previously reported. The pathology of CTE was distinctive from other tauopathies in that it was irregular, patchy, and perivascular, with a tendency to be most severe at the depths of the sulci in the frontal and temporal cortex (Figure 64-2). In addition, the p-tau neurites were dot-like, unlike the neuropil threads of AD, and there were p-tau immunoreactive astrocytes in the subpial and periventricular regions.
FIGURE 64-2. Patterns of p-tau isoforms 3R and 4R in chronic traumatic encephalopathy (CTE) in frontal and temporal lobes. A, D. 3R p-tau immunostaining shows scattered immunopositive neurons in the middle frontal cortex. B, E. 4R p-tau immunostaining shows many immunopositive neurons and astrocytic tangles in the subpial region of the middle frontal cortex and at the depth of the sulcus. C, F. AT8 (p-tau) immunostaining shows 3R and 4R p-tau immunopositive neurons and astrocytic tangles in the middle frontal cortex. G, J, M. 3R p-tau immunostaining shows scattered immunopositive neurons in CA1 (G), CA2 (J), and CA4 of the hippocampus (M). H, K, N. 4R p-tau immunostaining shows many immunopositive neurons in CA1 (H), CA2 (K), and CA4 hippocampus (N). I, L, O. AT8 (p-tau) immunostaining shows 3R and 4R p-tau immunopositive neurons in CA1 (I), CA2 (L), and CA4 hippocampus (O). All 10 μ paraffin-embedded sections, magnification bars 50 μm.
Clinical Symptoms and Diagnosis
The symptoms of CTE vary and generally include cognitive, behavioral, and motor symptoms that progress with the disease. The initial symptoms may include headache, loss of concentration, mood swings, short-term memory loss, and depression. With disease progression, these symptoms worsen and become associated with impairments in decision-making and judgment, explosive behavior, aggression, paranoia, parkinsonism, and gait abnormalities. Additional symptoms may include visuospatial abnormalities, verbal and physical violence, and suicidal ideation and attempts. In general, behavioral and mood symptoms are more common in younger patients, with motor and cognitive symptoms dominating in older adults.
Definitive diagnosis of CTE can only be made on autopsy; thus, patient evaluation should focus on characterizing the clinical phenotype and excluding other diseases that could account for the presenting symptoms.
Clinical evaluation should focus on symptoms and history of exposure to contact sports, repeated head trauma, concussions, military employment with traumatic or blast injuries, and behavioral, cognitive, mood, or motor symptoms. Neurological examination should look for signs of LBD, muscle fasciculations, evidence of parkinsonism, language abnormalities, and motor neuron disease. Each patient should undergo neuropsychological testing to identify deficits in memory, attention, language, and visuospatial function.
Routine laboratory tests and a brain MRI, as well as amyloid and tau PET scans, could be considered to confirm tau deposition in the brain. Alternatively, cerebrospinal fluid could be collected to assay amyloid, tau, and other analytes. Given that the diagnosis of CTE can only be made on neuropathological examination of the brain, the remainder of this chapter focuses on the pathology of CTE.
Neuropathology of CTE
Preliminary criteria for the neuropathological diagnosis of CTE were presented in 2013 as part of a clinicopathological case series of 68 male subjects with CTE, ranging in age from 17 to 98 years (mean 59.5 years), and 18 age- and gender-matched controls without a history of brain trauma. The neuropathological criteria proposed by McKee et al. for CTE required the presence of focal epicenters of p-tau immunoreactive NFTs and abnormal neurites around a small vessel, distributed at the depths of the sulci in the cerebral cortex; NFTs distributed in the superficial layers of cortex; and p-tau immunoreactive astrocytes in the subpial layer at the depths of the sulcus,
most often found in the frontal and temporal cortices (Figure 64-1). Other frequent pathologies in CTE included axonal loss in the subcortical white matter and the cooccurrence of TDP-43 immunoreactive inclusions and neurites.
In addition, McKee and colleagues proposed a staging scheme for characterizing the progressive p-tau pathology in CTE (McKee CTE Staging Classification Scheme) (Table 64-1). The method of staging CTE p-tau pathology was based on the previous work of Braak and Braak in AD, who examined a series of 83 autopsy brains and found a characteristic distribution pattern of NFTs and neuropil threads (NTs) that permitted the differentiation of six pathological stages of AD. The Braak staging system for NFT forms the basis for the neuropathological diagnosis of AD used by the National Institute on Aging, and similar staging schemes are now available for Aβ plaques in AD and LBs in PD. Based on the 68 CTE cases, McKee and colleagues identified four pathological stages of CTE (I–IV), primarily using large hemispheric slides immunostained for p-tau (Figure 64-3). In the earliest stage of CTE, stage I, there are one or two isolated epicenters of NFTs and dot-like neurites (ie, “CTE lesions”) arranged around small blood vessels at the depths of the sulci in the frontal, temporal, or parietal cortices. The small blood vessels at the center of the CTE lesions are usually small arterioles and may be associated with p-tau immunopositive thorn-shaped astrocytes (TSAs) in the subpial region. In stage II CTE, three or more CTE lesions are found in multiple cortical regions, the CTE lesions are larger, superficial NFTs are found along the sulcal wall and at gyral crests of the adjacent cortices, and there is more neurofibrillary pathology in the locus coeruleus and nucleus basalis of Meynert (Figure 64-1). In stage III CTE, confluent perivascular patches of p-tau immunoreactive NFTs and dotlike and threadlike neurites are found at the sulcal depths, as well as NFTs in the superficial cortical laminae.
Diffusely distributed NFTs are also found in medial temporal lobe structures, including the hippocampus, entorhinal cortex, perirhinal cortex, and amygdala, as well as in additional brainstem structures. Neurofibrillary degeneration in stage III CTE involves CA4 and CA2, as well as CA1 of the hippocampus (Figure 64-4). In stage IV CTE, CTE lesions and NFTs are densely distributed throughout the cerebral cortex, diencephalon, brain stem, cerebellar dentate nucleus, and spinal cord, with neuronal loss and gliosis in the frontal and temporal cortices and astrocytic p-tau pathology (Figure 64-5). CTE pathology is considered mild in stages I and II and severe in stages III and IV.
TABLE 64-1 ■ CTE NEUROPATHOLOGICAL DIAGNOSTIC CRITERIA
FIGURE 64-3. Coronal sections demonstrating stages of chronic traumatic encephalopathy (CTE). In stage I (top row) CTE, p-tau pathology is found in discrete foci in the cerebral cortex, most commonly in the superior or lateral frontal cortices, typically around small vessels at the depths of sulci. In stage II CTE (second row), there are multiple foci of p-tau at the depths of the cerebral sulci and localized spread of neurofibrillary pathology from these epicenters to the superficial layers of adjacent cortex. The medial temporal lobe is spared neurofibrillary p-tau
pathology. In stage III CTE (third row), p-tau pathology is widespread; the frontal, insular, temporal, and parietal cortices show widespread neurofibrillary degeneration with greatest severity in the frontal and temporal lobes, and concentrated at the depths of the sulci. Also in stage III CTE, the amygdala, hippocampus, and entorhinal cortex show substantial neurofibrillary pathology that is not found in earlier stages. In stage IV CTE (fourth row), there is widespread and severe p-tau pathology affecting most regions of the cerebral cortex and the medial temporal lobe, sparing calcarine cortex in all but the most severe cases. All images: CP-13 (p-tau) immunostained 50-μm whole mount tissue sections.
FIGURE 64-4. Histological findings in stage III chronic traumatic encephalopathy (CTE). (A) Whole mount coronal sections in stage III CTE show multiple cortical foci of p-tau pathology throughout the frontal and temporal cortices. The cortical epicenters and depths of the sulci often consist of confluent masses of neurofibrillary tangles (NFTs) and astrocytic tangles (ATs). (B–D) The p-tau pathology consists of NFTs, ATs, and dotlike and threadlike dystrophic neurites clustered around a penetrating cortical vessel, likely an arteriole. (E) Cortex adjacent to the cortical p-tau foci shows scattered NFTs. (F) The hippocampus shows dense neurofibrillary pathology. Subpial ATs (G) and p-tau immunopositive astrocytes may be prominent (H). There may be dystrophic neurites in the white matter (I). SMI-34 immunostaining shows reduction in axonal staining and numerous large, irregular axonal varicosities (J). (A–I: free-floating 50-μm sections, AT8 [p-tau] immunostain; J: SMI-34 immunostain, 10-μm paraffin section.)
FIGURE 64-5. Histologic findings in stage IV chronic traumatic encephalopathy (CTE). Top: Whole mount coronal sections immunostained for CP-13 (p-tau) show widespread p-tau pathology affecting most regions of the cerebral cortex and medial temporal lobe, with characteristic concentration at the depths of the cortical sulci. Middle: In stage IV CTE, immunostaining for CP-13 (p-tau) shows prominent astrocytic tangles and marked neuronal loss in the cortex, amygdala, and hippocampus, in addition to dense neurofibrillary pathology. Bottom: Immunostaining for p-TDP-43 shows widespread pTDP-43 abnormalities in stage IV CTE. All images: 50-μm tissue sections.
McKee and colleagues found a significant correlation between the stage of CTE pathology and duration of football career, supporting a dose-response relationship between cumulative head trauma exposure and CTE severity. They also found significant associations between CTE stage and both the number of years after retirement from football and age at death, data supporting progression of p-tau pathological severity over time. By contrast, number of concussions, years of education, lifetime steroid use, and position played did not significantly relate to CTE stage.
Using the McKee criteria in 2015, the NINDS and National Institute of Biomedical Imaging and Bioengineering (NIBIB) funded a consensus meeting of expert neuropathologists to evaluate 25 cases of various tauopathies
blinded to all clinical, demographic, and gross neuropathological information. The tauopathies included CTE, AD, progressive supranuclear palsy (PSP), argyrophilic grain disease (AGD), corticobasal degeneration (CBD), primary age-related tauopathy (PART), and parkinsonism dementia complex of Guam (G-PDC). All cases were of moderate to severe pathological severity and without comorbid diseases. Unknown to the neuropathologists before their analysis, the cases included 10 cases of suspected CTE that were part of the NINDS-funded Understanding Neurological Injury and Traumatic Encephalopathy (UNITE) or Veterans Affairs-Boston University-Concussion Legacy Foundation (VA-BU-CLF) brain bank at Boston University School of Medicine (BUSM), including seven cases of CTE with Aβ plaques and three cases without Aβ plaques.
Five cases of AD with Braak stages V–VI, two cases of PSP, and two cases of CBD were also selected from the Alzheimer’s Disease Center (ADC) brain bank at BUSM. Two cases of G-PDC and two cases of AGD were selected from the Alzheimer’s Disease Research Center (ADRC) brain bank at Mayo Clinic-Jacksonville, and two cases of PART were selected from the ADRC brain bank at Columbia University. A single laboratory processed all the cases uniformly, and the resulting slides were scanned into digital images that were provided to the neuropathologists.
There was good agreement among the evaluating neuropathologists regarding the overall neuropathological diagnosis of all 25 cases (Cohen’s kappa, 0.67), and even better agreement regarding the specific diagnosis of CTE (Cohen’s kappa, 0.78), using the proposed McKee criteria. Of the 10 cases submitted with the presumptive diagnosis of CTE, 64 of the 70 reviewers’ responses (91.4%) indicated CTE as the diagnosis. There was a significant decrease in errors that paralleled the sequence of cases evaluated: the log of the expected errors decreased by 0.43 for each case of CTE reviewed (p = 0.024). There were common additions to the CTE diagnosis, including “changes of Alzheimer’s disease” (ADC) and AD in the cases with Aβ plaques. Other comorbid diagnoses included hippocampal sclerosis (HS), AGD, and PART. Of the 15 other tauopathy cases submitted for review with diagnoses other than CTE, the reviewers generally agreed with the submission diagnoses of AD (97.1% of responses), CBD (92.8%), and PART (78.5%); however, there were frequent discrepancies in cases with presumptive diagnoses of PSP, AGD, and G-PDC.
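The agreement figures quoted here are Cohen's kappa values, which compare observed rater agreement with the agreement expected by chance. As a rough illustration only (the diagnoses below are invented toy data, not the consensus study's ratings, and the function is ours), kappa can be computed as:

```python
# Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
# agreement and p_e is chance agreement from each rater's label frequencies.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of cases on which the two raters agree
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    labels = set(ca) | set(cb)
    p_e = sum((ca[lab] / n) * (cb[lab] / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical diagnoses from two raters over six cases
a = ["CTE", "CTE", "AD", "PSP", "AD", "CTE"]
b = ["CTE", "AD", "AD", "PSP", "AD", "CTE"]
print(round(cohens_kappa(a, b), 2))  # → 0.74
```

A kappa of 1.0 indicates perfect agreement, 0 indicates agreement no better than chance; values around 0.67–0.78, as reported by the panel, are conventionally read as substantial agreement.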
The NINDS consensus group defined the single pathognomonic criterion for CTE as “an accumulation of abnormal p-tau in neurons, astrocytes, and cell processes around small vessels in an irregular pattern at the depths of the cortical sulci.” They also concluded that the criteria reliably distinguished CTE from other tauopathies, and they refined the original McKee criteria primarily to better distinguish CTE from age-related tau astrogliopathy (ARTAG).
The consensus panel defined supportive criteria for CTE as follows: “(1) abnormal p-tau-immunoreactive pretangles and NFTs preferentially affecting superficial layers (layers II–III); (2) in the hippocampus, pretangles, NFTs or extracellular tangles preferentially affecting CA2 and pre-tangles and prominent proximal dendritic swellings in CA4; (3) abnormal p-tau-immunoreactive neuronal and astrocytic aggregates in subcortical nuclei, including the mammillary bodies and other hypothalamic nuclei, amygdala, nucleus accumbens, thalamus, midbrain tegmentum, and isodendritic core (nucleus basalis of Meynert, raphe nuclei, substantia nigra, and locus coeruleus); (4) p-tau-immunoreactive thorny astrocytes at the glial limitans most commonly found in the sub-pial and periventricular regions; and (5) p-tau-immunoreactive large grain-like and dot-like structures (in addition to some threadlike neurites).”
The panel noted that the diagnosis was often aided by low-power inspection of the slide, which revealed the distinctively irregular spatial pattern of p-tau in CTE. In addition, they observed that the pattern of superficial cortical involvement and hippocampal degeneration was unlike AD, that the p-tau neurites were often dot-like, and that the TDP-43-immunoreactive inclusions in CTE were distinctive. They also made recommendations for the diagnosis and practical evaluation process of potential CTE cases.
The preliminary NINDS criteria for the pathological diagnosis of CTE were subsequently validated by Bieniek and colleagues in a study of participants from the Mayo Clinic Brain Bank in which the brains of 66 contact sport athletes and 198 controls without a history of brain trauma or contact sports were reanalyzed for p-tau pathology. Using the NINDS criteria, the authors found CTE in 21 of the 66 contact sport athletes (32%) and none of the 198 controls. Multiple international groups have used the NINDS criteria to evaluate and publish the neuropathological findings of
CTE in various cohorts, including soccer players, American football players, a bull-rider, and rugby players.
In 2016, the consensus panel met again to validate and refine the preliminary pathological criteria for CTE provided by the first NINDS consensus conference, using a second sample of 27 tauopathy cases (including 17 CTE cases representing all severities of disease), blinded to clinical and demographic information, to (1) develop the minimum threshold for diagnosis and (2) determine whether the McKee CTE Staging Classification Scheme was reliable using a limited number of paraffin-embedded slides. Generalized estimating equation analyses showed a statistically significant association between the raters and CTE diagnosis for both the blinded (OR = 72.11, 95% CI = 19.5–267.0) and unblinded rounds (OR = 256.91, 95% CI = 63.6–1558.6).
In addition, the group discussed the minimum threshold for the diagnosis of CTE and the pathological features critical to a strict definition of “pathognomonic CTE lesion.” The group endorsed a single pathognomonic lesion in the cortex as the minimum threshold for CTE. The group also clarified that the pathognomonic lesion of CTE is a neuronal lesion characterized by NFTs and disordered neurites that may or may not contain p-tau immunoreactive astrocytes. The following features of the pathognomonic lesion were considered necessary: p-tau aggregates in neurons, with or without concomitant p-tau-immunoreactive TSAs, at the depth of the sulcus around small blood vessels, in deeper cortical layers not restricted to subpial and superficial regions. The panel unanimously confirmed that based on case material available, purely astrocytic perivascular p-tau lesions (including subpial ARTAG) did not meet criteria for CTE. Furthermore, clusters of p-tau immunoreactive astrocytes in the white matter of the frontal and temporal cortex, basal ganglia, or lateral or medial brain stem were considered consistent with ARTAG and not specific features of CTE.
Recognizing that the McKee criteria for staging CTE were based on a comprehensive panel of representative paraffin-embedded slides from multiple brain regions as well as hemispheric whole mount tissue sections, the panel suggested a simplified scheme to assess CTE severity using a restricted number of slides. The panel also proposed a working protocol for the diagnosis of CTE and assessment of CTE severity as “low CTE” or “high
CTE,” corresponding to CTE stages I–II and CTE stages III–IV, respectively (Table 64-1, Figure 64-3).
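The simplified severity scheme just described collapses the four McKee stages into two bands. As a toy sketch only (the integer encoding and function name are ours, not part of the consensus protocol):

```python
# Map a McKee CTE stage (encoded 1-4 for stages I-IV) to the simplified
# NINDS severity label: stages I-II -> "low CTE", stages III-IV -> "high CTE".
def cte_severity(stage: int) -> str:
    """Return the simplified severity label for a McKee CTE stage (1-4)."""
    if stage not in (1, 2, 3, 4):
        raise ValueError("McKee CTE stage must be 1, 2, 3, or 4")
    return "low CTE" if stage <= 2 else "high CTE"

print(cte_severity(2))  # → low CTE
print(cte_severity(3))  # → high CTE
```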
CTE Is a Primary Tauopathy
CTE is a primary tauopathy, and early stage CTE is characterized pathologically by a distinctive accumulation of p-tau in neurons as NFTs and dot-shaped NTs around small arterioles in the absence of any Aβ pathology. The arteriole at the center of the CTE lesion is often thickened and surrounded by a wide perivascular space containing occasional hemosiderin- laden macrophages. Subpial TSAs may be found in association with the CTE lesion, but in isolation, are not diagnostic for CTE. Moreover, Aβ deposition is not a feature of early CTE, but diffuse Aβ plaques commonly develop in CTE with aging and may occur in parallel with CTE p-tau pathology.
Astrocytes in CTE and Distinction from ARTAG
Age-related tau astrogliopathy (ARTAG) is a neuropathological entity often found as a comorbidity in older individuals. There are several publications in which ARTAG pathology is misclassified as mild CTE, and these reports are accompanied by mistaken claims that CTE pathology is often found in normal controls. In contrast, a comprehensive study of p-tau pathology in 310 older Europeans found no cases of CTE despite the presence of ARTAG in 38% of cases.
Progression in CTE
P-tau pathology becomes progressively more severe in most subjects with CTE with age; this progressive pathology forms the basis of the McKee CTE Staging Classification Scheme. Some individuals, however, show low levels of CTE pathology at an advanced age, supporting an indolent disease course. Nonetheless, for the majority of former American football players whose brains were donated to the UNITE brain bank, the McKee staging scheme correlates with duration of football career, number of years after retirement from football, and age at death, findings that support a progression of p-tau pathology severity with survival over time. Neuroinflammation, p-tau IHC staining density, and white matter rarefaction also increase in severity in association with the CTE stage and correlate with dementia. Among 177 American football players with CTE, semiquantitative assessments of p-tau pathology showed a graded increase from stage I, where p-tau pathology was highest in frontal cortex and locus coeruleus; to stage II, where the p-tau
pathology increased and was highest in the frontal, temporal, parietal, septal, insular, and entorhinal cortices, amygdala, thalamus, substantia innominata, substantia nigra, and locus coeruleus; to stage III, where there were further increases in p-tau pathology across all previous regions and the hippocampus; to stage IV, with still larger increases across all regions (Figure 64-3). The McKee CTE Staging Classification Scheme also correlates with quantitative regional assessments of p-tau pathology and dementia status. Other factors, such as the development of comorbidities with advancing age, may also drive progression of clinical symptoms in CTE in addition to increasing p-tau pathology.
SUMMARY
TBI is a risk factor for dementia and parkinsonism, although the precise neuropathological underpinnings of these clinical conditions are unclear. TBI is a heterogeneous condition consisting of different grades of severity (mild, moderate, severe), frequency (single, multiple), types (focal, diffuse), nature (penetrating, blunt impact, blast), and presence or absence of skull fracture, hemorrhage, contusion, infarction, and/or other secondary processes.
Neuropathological alterations after TBI include changes of AD, LBD, motor neuron disease, accumulation of p-tau or TDP-43, axonal disruption and loss, cerebral atrophy, blood-brain barrier dysintegrity, white matter degeneration, and chronic inflammation. Repetitive mild TBI or RHI, such as those typically experienced during contact sport participation, combat military service, and physical abuse, have been associated with CTE, a distinctive p-tau–based disorder. Histologically, CTE pathology is defined by irregular, patchy, and perivascular accumulation of p-tau NFTs and dotlike neurites, with a tendency to be most severe at the depths of the sulci in the frontal and temporal cortices. CTE is a progressive tauopathy that expands over time to involve deep brain structures such as the hippocampus, entorhinal cortex, amygdala, brainstem, and cerebellum. The pathological severity of CTE is divided into four stages (I–IV) using the McKee CTE Staging Classification Scheme, which has been shown to correspond with quantitative regional assessments of p-tau pathology and correlates significantly with age at death and duration of football career. Recently, a NINDS consensus conference proposed an abbreviated scheme for classifying the pathology into low or high based on a limited number of paraffin-embedded blocks. At the current
time, CTE can only be diagnosed by postmortem neuropathological evaluation; thus, there is an urgent need to develop diagnostic biomarkers to advance diagnosis during life and to evaluate potential therapeutic targets.
FURTHER READING
Barnes DE, Byers AL, Gardner RC, Seal KH, Boscardin WJ, Yaffe K. Association of mild traumatic brain injury with and without loss of consciousness with dementia in US military veterans. JAMA Neurol. 2018;75(9):1055–1061.
Bieniek KF, Cairns NJ, Crary JF, et al. The second NINDS/NIBIB consensus meeting to define neuropathological criteria for the diagnosis of chronic traumatic encephalopathy. J Neuropathol Exp Neurol. 2021;80(3):210–219.
Crane PK, Gibbons LE, Dams-O’Connor K, et al. Association of traumatic brain injury with late-life neurodegenerative conditions and neuropathologic findings. JAMA Neurol. 2016;73(9):1062–1069.
Dams-O’Connor K, Guetta G, Hahn-Ketter AE, Fedor A. Traumatic brain injury as a risk factor for Alzheimer’s disease: current knowledge and future directions. Neurodegener Dis Manag. 2016;6(5):417–429.
Doherty C, O’Keeffe E, Keaney J, et al. Neuropolypathology as a result of severe traumatic brain injury? Clin Neuropathol. 2018;38:14–22.
Fann JR, Ribe AR, Pedersen HS, et al. Long-term risk of dementia among people with traumatic brain injury in Denmark: a population-based observational cohort study. Lancet Psychiatry. 2018;5(5):424–431.
Goldstein LE, Fisher AM, Tagge CA, et al. Chronic traumatic encephalopathy in blast-exposed military veterans and a blast neurotrauma mouse model. Sci Transl Med. 2012;4(134):134ra60.
Jafari S, Etminan M, Aminzadeh F, Samii A. Head injury and risk of Parkinson disease: a systematic review and meta-analysis. Mov Disord. 2013;28(9):1222–1229.
Johnson VE, Stewart W, Arena JD, Smith DH. Traumatic brain injury as a trigger of neurodegeneration. Neurodegener Dis. 2017;15:383–400.
Kenney K, Iacono D, Edlow BL, et al. Dementia after moderate-severe traumatic brain injury: coexistence of multiple proteinopathies. J Neuropathol Exp Neurol. 2018;77(1):50–63.
Lee Y-K, Hou S-W, Lee C-C, Hsu C-Y, Huang Y-S, Su Y-C. Increased risk of dementia in patients with mild traumatic brain injury: a nationwide cohort study. PLoS One. 2013;8(5):e62422.
Ling H, Morris HR, Neal JW, et al. Mixed pathologies including chronic traumatic encephalopathy account for dementia in retired association football (soccer) players. Acta Neuropathol. 2017;133(3):337–352.
Marras C, Hincapié CA, Kristman VL, et al. Systematic review of the risk of Parkinson’s disease after mild traumatic brain injury: results of the international collaboration on mild traumatic brain injury prognosis. Arch Phys Med Rehabil. 2014;95(3 Suppl):S238–S244.
McKee AC, Cairns NJ, Dickson DW, et al. The first NINDS/NIBIB consensus meeting to define neuropathological criteria for the diagnosis of chronic traumatic encephalopathy. Acta Neuropathol. 2016;131:75–86.
McKee AC, Daneshvar DH, Alvarez VE, Stein TD. The neuropathology of sport. Acta Neuropathol. 2014;127(1):29–51.
McKee AC, Stein TD, Nowinski CJ, et al. The spectrum of disease in chronic traumatic encephalopathy. Brain. 2013;136(1):43–64.
Mez J, Daneshvar DH, Abdolmohammadi B, et al. Duration of American football play and chronic traumatic encephalopathy. Ann Neurol. 2020;87(1):116–131.
Mez J, Daneshvar DH, Kiernan PT, et al. Clinicopathological evaluation of chronic traumatic encephalopathy in players of American football. JAMA. 2017;318(4):360–370.
Nordström A, Nordström P. Traumatic brain injury and the risk of dementia diagnosis: a nationwide cohort study. PLoS Med. 2018;15(1):e1002496.
Ojo JO, Mouzon B, Algamal M, et al. Chronic repetitive mild traumatic brain injury results in reduced cerebral blood flow, axonal injury, gliosis, and increased T-tau and tau oligomers. J Neuropathol Exp Neurol. 2016;75(7):636–655.
Plassman BL, Grafman J. Traumatic brain injury and late-life dementia. Handb Clin Neurol. 2015;128:711–722.
Stewart W, McNamara PH, Lawlor B, Hutchinson S, Farrell M. Chronic traumatic encephalopathy: a potential late and under recognized consequence of rugby union? QJM Int J Med. 2016;109(1):11–15.
Wilson L, Stewart W, Dams-O’Connor K, et al. The chronic and evolving neurological consequences of traumatic brain injury. Lancet Neurol. 2017;16(10):813–825.
Yaffe K, Lwi SJ, Hoang TD, et al. Military-related risk factors in female veterans and risk of dementia. Neurology. 2019;92(3):e205–e211.
Chapter
Major Depression
Whitney L. Carlson, William Bryson, Stephen Thielke
EPIDEMIOLOGY AND PATHOPHYSIOLOGY
Distinctive Features of Geriatric Depression
Various theories have tried to explain generally why people become depressed at different ages, but little fundamental understanding has developed around the causes of depression. Instead of sifting through theories and research, we summarize several generally accepted principles, which are usually taught as part of medical or psychology training, but which may be forgotten during a productive career. These, and the findings of research, form the groundwork for the recommendations we make about assessment and management.
Depression is not just part of getting older Although challenging life events, such as health problems and the loss of family and friends, may occur more often in later life, and although the prospect of one’s own death may seem distressing, such cumulative negative events have not been shown to cause depression. In fact, aging generally involves resilience. Multiple studies demonstrate that around 5% of older adults meet criteria for major depression, which is lower than in other age groups. The likelihood of new-onset depression peaks in middle age. Among older adults, it appears that greater age involves a similar likelihood of becoming depressed, but a higher likelihood of remaining depressed. Therefore, it would be misguided and unproductive to normalize depression in older patients. Younger clinicians might learn from their older patients about how to navigate through the later stages of life.
Older age brings new challenges It is unlikely that any two people have ever experienced aging in exactly the same way, but certain generalizations hold.
For our purposes, “older” means after the middle stage of life. Erik Erikson’s theory of life stages conceives that mid-life (roughly age 40–65) involves a tension between generativity and stagnation. If individuals can continue to make progress in work, society, and family matters, they will feel comfortable, but if they lack direction or become stymied, they will experience distress. If navigated successfully, this stage builds the virtue which Erikson defined as “care.” In later life (after age 65), the challenges shift to ego integrity versus despair: can individuals accept their lives in full, and conclude that they made some contribution to the world? Or do they feel guilty about their actions, and believe that everything was meaningless? If ego integrity is sustained through later life, people develop the virtue of wisdom. Although there is no necessary association with major depression and this theory, clinicians can remain attuned to this change in life tension and goals.
Learning Objectives
Identify and characterize major depression; and understand how to recognize it within the context of common biological, psychological, social, and developmental challenges that older adults experience.
Understand and select evidence-based treatment options for major depression in older adults, in light of their risks and limitations, and accounting for availability and patient preferences.
Identify and be ready to apply practical strategies to support and develop therapeutic alliance with depressed older adults.
Key Clinical Points
Treating older adults with depression can challenge clinicians emotionally, and it is important to consider one’s reactions to the experience. Clinicians interacting with depressed older adults can convey empathy and genuine human concern for their well-being.
Although aging is associated with new life challenges, depression is not a normal part of getting older. Depression should be identified and treated when present.
Appreciate the importance of assessing suicide risk, and develop expertise in assessing suicidality in clinical encounters.
There is significant overlap between depression, social disconnection, and inability to meet basic human needs. The presence of anhedonia (lack of interest or pleasure) or hopelessness can indicate the presence of depression.
There are several evidence-based treatments for major depression in older adults, including medications, psychotherapy, and electroconvulsive therapy (ECT). Availability and patient preference are more important considerations than effectiveness.
In most cases, it is reasonable to consider tapering off an antidepressant after symptoms resolve or after an adequate trial produces no clear benefit. Exceptions would be a history of multiple depressive episodes or known decompensation with prior attempted dose taper.
All depressed older patients should be assessed for suicide risk. At minimum, this should entail evaluation of suicidal thoughts, intent to act on suicidal thoughts, suicidal plans, and access to lethal means.
Depression is not just subjective distress Depression in later life has serious consequences for overall health and quality of life. It is associated with disability, cognitive impairment, perception of poorer health and quality of life, social disconnection, greater difficulty managing medical comorbidities and chronic pain, increased healthcare utilization, greater mortality, and higher costs. The relationship between depression and adverse health outcomes is likely bidirectional, with depression worsening health and function, and impaired health and function deepening depression in return.
Consequences of the relationship between depression and overall health and function include (a) the emergence of close connections between depression, social disconnection, and motivation to meet one’s basic human needs, and
(b) the potential for a downward spiral as depression and poor health exacerbate one another. We expand upon these consequences in the following section.
Interactions Between Geriatric Depression and Aspects of Health and Social Well-Being
Geriatric depression and basic human needs The relationship between depression and ability to meet one’s basic needs is complex. Abraham Maslow’s theory of human motivation, from which his hierarchy of needs derives, highlights clinical insights and potential pitfalls at the interface between depression and basic needs. Starting with basic physiological needs (eg, food, shelter, critical medicine) as the most fundamental source of motivation, the hierarchy progresses to basic physical and emotional safety, love and belonging, esteem, and, finally, self-actualization. Two core features of geriatric depression that distinguish it from other late-life mood disturbances are anhedonia (lack of interest or pleasure) and hopelessness. These, especially in combination, erode motivation to meet one’s basic needs. It can be hard to identify depression as the underlying problem when other aspects of health and well-being are wanting, especially when deficits in basic needs are often more apparent than depressed mood. In these situations, anhedonia and hopelessness may be important clues to the presence of depression.
Life circumstances resulting in unmet basic needs can also precipitate depression in older adults. Figure 65-1 illustrates this relationship.
Deprivations at the higher levels of the hierarchy (self-actualization and esteem) tend to manifest as feelings of dissatisfaction and inadequacy, respectively, but not full-blown depression. Clinicians eager to diagnose and treat depression might overdiagnose it in these scenarios by misinterpreting minor mood symptoms as a pathological condition. Depression tends to emerge when love and belonging needs are unmet. Abundant empirical evidence and theoretical frameworks, such as interpersonal theory, point to the critical importance of meaningful social connections in sustaining mood among older adults who experience changes in social roles (eg, retirement, grandparenthood), grief and loss, and, sometimes, functional and mobility limitations.
FIGURE 65-1. Depression erodes motivation to meet basic human needs (left), resulting in unmet needs and their associated consequences (right).
Although not inevitable or considered to be a part of normal aging, vulnerability to social disconnection can increase in these and other common scenarios, and such social deprivations are tightly linked to depression.
Deficits in the most fundamental needs, including safety and basic physiological needs, are usually more urgent than love and belonging deficits and the depression that may accompany them. There is another potential pitfall here, as some clinicians will focus on treating depression as a more tractable issue than addressing safety and physiological needs (like housing and food insecurity), hoping that the patient will be able to find solutions to these more fundamental needs once depression is treated.
Social disconnection, loneliness, and depressed mood in older adults The link between social disconnection and depression among older adults has become increasingly recognized as a general finding, a source of clinical concern, and an opportunity for intervention. Broadly, social disconnection is linked to physiologic and behavioral changes, such as chronic inflammation, changes in oxytocin and monoamine signaling, hypothalamic-pituitary-adrenal axis reactivity, unhealthy behaviors (eg, substance use, poor sleep hygiene, sedentary lifestyle), and challenges in coping and decision-making. The structural aspects of social connection include objective measures of network size (the number of contacts one has), frequency and duration of contacts, and living situation. Such connections sustain an individual’s emotional well-being, and deprivation in structural connection is often termed social isolation. The term loneliness designates the perception that the function and/or quality of one’s social connections are lacking. These distinct but closely related concepts offer another perspective from which to view geriatric depression, its health consequences, and treatment options.
Social isolation can develop in several common late-life scenarios, including retirement, widowhood, distance from adult children, sensory and mobility impairments, chronic medical comorbidities, and selective pruning of social contacts to the most meaningful few. Some forms of social isolation, such as living alone and selectivity, may reflect capability, independence, and preference, which are associated with positive health outcomes, so care must be taken not to assume that all social isolation signifies a problem.
Loneliness, however, is unlikely to be an adaptive coping mechanism. Someone can be surrounded by family and friends and still feel lonely if the quality of those relationships is poor, or if those people are offering kinds of support that the person does not want; but loneliness typically occurs in the context of some degree of social isolation. Loneliness is fundamentally related to the experience of depression, as the perceived lack of love and belonging overlaps with depressed mood, negative self-appraisal, and feelings of worthlessness and being a burden. The opposite direction of causation, in which depression leads to loneliness, is also common, as depression inhibits motivation and ability to sustain relationships and feel good about them. The mutual reinforcement of social disconnection and depression can plunge older adults into a downward spiral that exacerbates morbidity and mortality. Treatment planning should consider the social factors that may either mitigate or compound depression.
Practically addressing social disconnection in geriatric care
Psychosocial treatments have been developed to address the social aspects of depression, including interpersonal psychotherapy and behavioral activation. These are effective treatments for geriatric depression, but they require expertise that is not always available in geriatric medicine settings. An alternative, simpler model that was developed to support mental wellness at the population level and in clinical settings is Act-Belong-Commit. In basic terms, “Act” refers to doing something, which could include physical activity, reading, or hobbies. “Belong” refers to doing something with other people that fosters a sense of inclusion and community, like game nights or book clubs. “Commit” refers to doing something meaningful, like value-driven volunteer work. Act-Belong-Commit is straightforward and systematic enough to use in a busy geriatric medicine or primary care clinic, while also targeting depression through behavioral activation, love and belonging needs through social activity, and esteem and self-actualization needs through value-driven engagement. Prescriptions for specific kinds of desired social activity can
help with credibility and accountability. While not a replacement for more traditional depression treatments, and not extensively studied in older adults, Act-Belong-Commit is a useful framework that geriatricians can incorporate into daily clinical practice.
Clinical symptoms and diagnosis of major depression
As a clinical phenomenon, depression deserves careful definition. Although people use the words “depressed” and “depression” in general speech to describe a low mood or feeling of distress, major depression involves specific hallmark findings. For the assessment of major depression in older adults, it is clinically useful and valid to use a brief, self-report rating scale such as the Patient Health Questionnaire-9 (PHQ-9) (Figure 65–2). The PHQ-9 is based on the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5) criteria for major depression. Note that symptom frequency is an important part of the assessment. Depression is thus quite different from having a bad day or dealing with challenging circumstances. Scores of 0 to 4 indicate either no depression or clinically insignificant depressive symptoms. Scores of 5 to 9 indicate mild depression, generally subsyndromal, while scores of 10 or higher typically correlate with a diagnosis of clinical (“major”) depression and warrant further evaluation and clinical intervention. The PHQ-9 is a good tool for measurement-based care to document the presence of symptoms and their change during treatment. Such instruments are also useful for educating patients and pinpointing which symptoms represent targets for intervention. Item 9 on the PHQ-9 asks the patient about death wishes or more intensive suicidal ideation; this is clearly an important part of the assessment. There is a broad range of risk and protective factors for suicide in older adults (Figure 65–3).
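The severity bands above reduce to simple arithmetic on the nine item scores. A minimal sketch in Python, using the cutoffs described in the text; the function name and severity labels are illustrative, not part of the published instrument:

```python
def phq9_severity(item_scores):
    """Classify depression severity from the nine PHQ-9 item scores.

    Each item is rated 0-3 for frequency over the prior 2 weeks.
    Cutoffs per the text: 0-4 none/minimal; 5-9 mild (generally
    subsyndromal); >=10 consistent with major depression and
    warranting further evaluation.
    """
    if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
        raise ValueError("PHQ-9 requires nine item scores, each 0-3")
    total = sum(item_scores)
    if total <= 4:
        return total, "none/minimal"
    if total <= 9:
        return total, "mild (subsyndromal)"
    return total, "possible major depression; evaluate further"
```

Note that a total score alone does not make the diagnosis; item 9 (suicidal ideation) always requires direct clinical follow-up regardless of the total.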
[Figure 65–2 here reproduces the PHQ-9 form: nine symptom items rated for frequency over the last 2 weeks (0 = not at all, 1 = several days, 2 = more than half the days, 3 = nearly every day); scoring instructions for the total symptom count and total severity score; a provisional diagnostic assessment table with treatment recommendations; and considerations for referral to a mental health specialty provider (high suicide risk, bipolar disorder, inadequate treatment response, complex psychosocial needs, or other active mental disorders).]
FIGURE 65-2. Patient Health Questionnaire-9 (PHQ-9). (PHQ-9 is adapted from PRIME MD TODAY, developed by Drs. Robert L. Spitzer, Janet B.W. Williams, Kurt Kroenke, and colleagues, with an educational grant from Pfizer Inc. For research information, contact Dr. Spitzer at ris8@columbia.edu. Use of the PHQ-9 may only be made in accordance with the Terms of Use available at http://www.pfizer.com. Copyright ©1999 Pfizer Inc. All rights reserved. PRIME MD TODAY is a trademark of Pfizer Inc.)
Although subsyndromal depression seems to have about the same effects on health and quality of life as major depression, antidepressant medications do not produce benefit in the treatment of subsyndromal depression. This creates a problem for clinicians. It is important to address patients’ concerns, and the message, “You do not merit treatment,” can seem discouraging or deflating. Instead of refusing to provide a treatment, we recommend using, and explaining, nonpharmacologic approaches. Subsyndromal depression might merit a “watch and wait” approach. And, as discussed below, major depression does not necessarily require treatment with an antidepressant medication, because other options exist, and many patients prefer them to medications.
TREATMENT
Efficacy of Treatments in the Real World
Observational studies of real-world depression care paint a rather bleak picture. About 5% of older adults in primary care settings have clinically significant depression. Of these, about half are recognized and diagnosed. Of that group, about one in five receives guideline-concordant care. Therefore, only about 10% of depressed older adults receive what would be considered “adequate” care.
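The cascade arithmetic behind that 10% figure can be made explicit. The proportions below are the approximate values quoted in the text, not precise epidemiologic estimates:

```python
# Approximate care cascade for geriatric depression, per the text.
prevalence = 0.05        # ~5% of older primary care patients are depressed
recognized = 0.5         # about half of depressed patients are diagnosed
guideline_care = 1 / 5   # about one in five diagnosed receive adequate care

# Fraction of all depressed older adults receiving guideline-concordant care:
adequately_treated = recognized * guideline_care
print(f"{adequately_treated:.0%}")  # prints "10%"
```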
Unfortunately, such “adequate”—or guideline-concordant—care does not guarantee positive outcomes. Clinical research has identified a variety of treatments that show statistically better outcomes than either no treatment or placebo, but such findings do not translate well into real-world clinical practice. First, the clinical studies generally recruit a restricted group of patients who do not represent the real world, either because they do not have other problems or because researchers can recruit them easily. Second, the process of engaging in research entails additional attention, such as frequent check-ins, which does not happen in typical care. The “active” treatment arm receives more than just the treatment, as does the “control” arm. Third, most
research studies last for weeks or months, and may produce short-lived outcomes. Finally, the benefits of treatment found in most clinical trials are relatively modest, often in the range of five or ten percentage points higher likelihood of a positive outcome.
The challenge of improving depression care in populations becomes evident in follow-up of collaborative care treatments for major depression. Collaborative care has, after extensive research and implementation, become recognized as the most effective structured approach for treating geriatric depression across primary care settings. Based on a large retrospective analysis of various clinic systems, patients in collaborative care had a substantially greater likelihood of improvement in depression than those in usual care. Nevertheless, even in this “gold standard” approach, fewer than half of patients experienced a depression response (a ≥ 50% reduction in a symptom scale) at 6 months, and fewer than a third had remission of depression. Two-thirds of the patients in collaborative care thus remained effectively depressed. This “real-world” rate is considerably lower than in clinical trials of collaborative care. Response was also tremendously heterogeneous: some clinics had depression response rates below 10% among patients who engaged in collaborative care, while others exceeded 80%.
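The outcome definition used here is easy to operationalize. A minimal sketch, assuming scores come from a symptom scale such as the PHQ-9 and applying only the ≥50% reduction criterion for “response” given in the text (remission thresholds vary by scale and are not specified here):

```python
def depression_response(baseline, followup):
    """True if symptom scores show a depression 'response':
    a reduction of at least 50% from the baseline score."""
    if baseline <= 0:
        raise ValueError("baseline score must be positive")
    return (baseline - followup) / baseline >= 0.5

# Example: a scale score falling from 18 to 9 is exactly a 50% reduction.
```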
This perspective may seem pessimistic, but it can be seen another way.
By acknowledging that no single treatment will yield substantially better outcomes, clinicians can select from a variety of available options, and can engage with patients to identify approaches that will best suit patient preferences and resources. Despite the apparently low response rates just mentioned, depression can and often does improve over time. The challenge lies in finding ways to hasten response.
Selection of Best Treatment for the Patient
In busy clinical settings, medications have become the mainstay for treating depression. Most primary care providers have learned about depression and its management, and comfortably write prescriptions for antidepressant medications. Yet several other reasonable options exist, and the clinician will benefit from having considered them, both theoretically and with each patient. We propose several critical steps to increase the success of this process:
Elicit and discuss patient preferences for depression treatment.
Recognize that antidepressant medications involve risks, in particular
falls, syndrome of inappropriate secretion of antidiuretic hormone, and mortality.
Address misconceptions about treatment options.
After selecting a treatment, ensure that it is (within bounds of safety) given a full trial.
Keep checking on symptoms during treatment.
If a treatment does not work, try another.
The core message for patients is, “We have a variety of treatments which
might work for you. None is a guarantee, but let’s try one that is safe and fits best with what you want. If that doesn’t work, we’ll try another.”
The options that clinicians may want to consider are not all necessarily evidence-based, but some may resonate well with patients, which is itself a promising sign. Some modalities may not be available, especially in rural or underserved communities. The discussion below considers each option and touches on some common preconceptions. Medications are discussed separately below.
Watch and wait Patients may recognize depression, but not want to take action. Especially if symptoms remain mild, it may be reasonable to continue to check in about them. This may seem antithetical to the mission of medicine, but it should be tempered by the discussion above about the effectiveness of treatments, and also by the recognition of low adherence. If the clinician has any concerns about safety or self-harm, this is obviously not a first choice.
Specialist referral In complex cases, such as the combination of depression with cognitive impairment or other forms of psychiatric illness, referral to a geriatric mental health specialist may provide the most benefit for the patient. Nevertheless, such management, even when available, does not provide any guarantees.
Electroconvulsive therapy (ECT) ECT has suffered from negative press and stereotypes. It remains by far the most effective treatment for depression, with response rates over 80%, and remission rates over 60%, in most studies. No trial has ever found medications to be more effective than ECT. Older patients seem to be even more responsive to ECT than younger adults. In its current form, ECT is a safe treatment with few side effects. Short-term memory loss almost always resolves within a few weeks after the course of
treatment. The average initial course involves three to six treatments per week for 2 to 4 weeks.
Unfortunately, the beneficial effects tend to decay over time, even with medications, and “maintenance” ECT is sometimes required. ECT should be considered as an early option for patients with severe or psychotic depression, and for those who have had limited medication response.
Collaborative care As discussed above, collaborative care has the best record of success of any system-level intervention for geriatric depression. It involves coordination between the patient, a designated care manager (who checks in with the patient, develops plans, measures symptoms, and carries out brief interventions), the primary care provider (who prescribes medications), and a consulting psychiatrist (who monitors outcomes and communicates with the care manager). An information system tracks symptoms and treatments. Given the positive effects for patients and providers, systems that offer collaborative care can more efficiently and effectively manage geriatric depression. Yet also as discussed above, this approach does not guarantee remission of depression, and not all clinic systems have successfully applied the model.
Individual or group psychotherapy Various types of psychotherapy, such as cognitive behavioral therapy and interpersonal therapy, can produce lasting improvements in mood, and development of coping skills and insights. Group approaches, including reminiscence therapy, have been shown to yield significant effects. In the largest treatment trial of geriatric depression to date, 51% of participants preferred counseling or psychotherapy, and 38% preferred antidepressant medications. Research studies of psychotherapy have used structured, short-term interventions, lasting roughly 3 months at most. The treatments involve clear goals and processes, and patients do not need to commit to “being analyzed.” If no improvement has occurred after about 1 or 2 months, that particular psychotherapy modality may not be an effective strategy. Limited availability remains a key challenge.
Brief assessment of life stressors Depression, as a clinical syndrome, involves more than just a stressful situation or two. But sometimes patients can become so burdened by aspects of their lives that their ability to cope with or work through problems suffers. This may be especially true for individuals taking care of others, especially a spouse with dementia. The clinician can provide referrals for social support services, or respite care for the spouse
of the patient. Sometimes caregivers need reassurance from an individual with expertise, such as a health care provider, that it is okay to take care of themselves instead of always providing care to others.
Physical activity Numerous studies have assessed the association between exercise and depression. Although exercise has not been established as a treatment for depression, people who exercise more have lower rates of depression onset and recurrence. Because physical activity has other benefits, clinicians can sincerely recommend this as an approach to sustain mental and physical health.
Effective Use of Medications to Treat Depression
Antidepressant medications will likely remain the mainstay of treatment in most settings. Although research has identified some subtle differences between antidepressant classes and specific drugs, no single treatment has clear superiority. Because numerous sources, including electronic interaction checkers, address side effects, we will not discuss them here. Instead, we present several findings that clinicians may not recognize or remember.
Chapter 60 contains detail about specific antidepressants and other psychotropics.
The placebo effect is powerful In quite a few studies, antidepressant medications have offered no statistically significant benefit over placebo, and response rates to placebo are often quite large. This appears especially so in less severe cases, and some meta-analyses have suggested that the actual effects of antidepressant medications (above and beyond the effects of placebo) occur only in more severe cases of depression. In all cases, the clinician can sincerely tell the patient that believing the medication will help increases the likelihood that it does help. Appreciating this point is especially important because most antidepressants take 6 to 8 weeks to have a measurable effect, and if a patient expects to feel better within a week or two, they may question whether the drug has any value.
Antidepressants treat major, not minor, depression Although antidepressant medications have a number of other indications, such as anxiety disorders, they do not show evidence of benefit for minor or subsyndromal depression. It may seem logical that “a little bit of antidepressant” will treat “a little depression,” but this does not hold up under scrutiny. Starting an antidepressant for a condition that may resolve spontaneously runs the risk of
assuming that the antidepressant produced the effect, thus committing the patient to continue the medication, or to face the problem of stopping it.
Antidepressants do not generally work in dementia Based on a large number of negative studies, there is general consensus that the usefulness of antidepressants in dementia is questionable. This creates a problem for patients with dementia, who are also unlikely to be able to engage in psychotherapy. Often other behavioral interventions, in particular related to caregiver support, can enhance mood. Insofar as polypharmacy is a significant problem in dementia care, it does not make sense to continue an antidepressant unless it yields a measurable benefit.
Slow up-titration may have benefits Older adults who were started on a lower dose of antidepressant, and then titrated up over multiple visits, had much better medication adherence than those who were started on a higher initial dose, even though they reached the same eventual dose. This is somewhat surprising, because one might expect that the sooner a target dosage is reached, the sooner a response would happen. More likely, the patient receives more personalized attention during dosage increases, and fewer side effects occur with titration.
There is no standard dosage for antidepressant effect Research on the dose-response curve for antidepressants supports neither a minimum effective dosage for each medication nor a narrow therapeutic window. For most antidepressants, there is only a small relationship between dosage and depression response, and the effect of zero dose (placebo) is substantial across all medications. Therefore, the best dosage is not a number fixed in stone, but rather the dosage that works for that individual patient.
Safety, side effects, and interactions will dictate the maximum dosage.
Patients may not tell you about side effects Unless specifically asked, patients may be reluctant to report side effects, especially those involving sexual dysfunction. They may choose instead simply to stop the medication. By presenting and normalizing the common side effects, including those related to sexual function, and by inquiring routinely about side effects of antidepressants, the clinician will enable patients to share concerns. If side effects are concerning enough, an antidepressant from another class can be tried, which is a better outcome than the patient deciding never to take an antidepressant again.
In an effort to improve adherence, patient satisfaction, and outcomes, astute clinicians will achieve better responses by (1) ensuring that the condition is amenable to antidepressant treatment; (2) capitalizing on the placebo effect by increasing positive expectations; (3) planning to titrate up slowly rather than starting at a full dosage; (4) not assuming that each drug has a set therapeutic window that applies across patients; and (5) encouraging discussion of side effects. We propose the flowchart in Figure 65-4 as a simple way to guide this decision.
FIGURE 65-4. Antidepressant dosing flowchart.
Optimal Duration of Antidepressant Treatment
DSM-5 defines major depression as an “episode” for good reason: the symptoms typically will improve over time, even without treatment.
Persistent depressive disorder (dysthymia) involves long-term, but less intense, symptoms, and is less common. This introduces a problem about ongoing medication use. If the depression remits, should the patient keep taking the medication to protect against future episodes? And if the depression does not improve, should the medication be kept simply because there is still a problem? Either case entails a logical trap: it requires reasoning about what the condition would have been like, in an alternative world, if the antidepressant had not been started, a question that is impossible to resolve.
It is difficult to say how many older adults are prescribed antidepressants “unnecessarily,” but it is likely a great many. In some populations, such as nursing home residents, almost half of individuals are prescribed an antidepressant, far higher than any estimates of depression prevalence. It is likewise difficult to quantify the harms and benefits that attend such a high level of prescription, except that polypharmacy in general, and some medications in particular, are consistently associated with negative consequences such as falls and hospitalizations. That especially becomes a problem if the medication is in fact providing no benefit.
Another important challenge is that cessation of many antidepressants, especially SSRIs and SNRIs, involves withdrawal effects. The typical effects include sweating, chills, dizziness, flu-like symptoms, GI upset, insomnia, vivid and excessive dreaming, depressed mood, vertigo, numbness, and shock-like sensations. These experiences can produce distress, and a sense that the mental health condition is worsening, or, in other words, a confusion of withdrawal with lack of efficacy. As a review by Shelton succinctly points out, “It is especially important to recognize the symptoms of discontinuation and to distinguish them from relapse and recurrence because a misdiagnosis may lead to unnecessary tests, useless treatments, and increased costs.”
It is easy to be swayed by concerns about “doing no harm” into continuing medications that might provide no clear benefit, because (one reasons) they might also be preventing the return of a condition like depression. The safest defensive position, on this reasoning, would seem to involve a strong offense. Yet that metaphor does not apply well in this setting, and the potential benefits of treatment always come at a potential cost.
When studied scientifically, the risks of stopping antidepressant medications are surprisingly low. There have been few studies focusing on older adults, but, as Gueorguieva and colleagues indicate, across age groups, “the existence of similar relapse trajectories on active medication and on placebo suggests that there is no specific relapse signature associated with antidepressant discontinuation. Furthermore, continued treatment offers only modest protection against relapse.” In other words, if a patient does well after an episode of depression, it may have nothing to do with the continued use of an antidepressant, and stopping the medication may be no different than continuing it.
Psychiatric research has shown that depression often relapses, even after complete remission, and that the likelihood of additional relapses increases with each episode. Someone who has had a single episode of depression would have little need for a perpetual “maintenance” medication, but a patient who has had six or ten such episodes, and did not relapse while taking a medication, would most likely benefit from long-term use. The proof of the effect of medications ultimately lies in repeated experience, not in hypothesizing about potential effects. Also, medications do not provide a guarantee against relapse, and side effects may develop or worsen even with a medication that has been prescribed for years. All of this argues for thoughtful consideration, at every visit, of whether the antidepressant is providing a material benefit.
Against this background, the only settings where antidepressants might reasonably be continued indefinitely and without reexamination are when depression has returned in relation to stopping the medication (and not simply during withdrawal from it), or when multiple episodes of depression have occurred in the past (which is effectively the same situation, because presumably the medication was not in place at the time of the relapses). In other cases, it makes sense to consider discontinuing the medication deliberately and slowly, while monitoring symptoms. The time frames are not exact, but most SSRIs, SNRIs, and mirtazapine can be tapered over 2 to 4 weeks, while tricyclics require several months. Fluoxetine, which has a long half-life, does not require tapering, and can be used to smooth out the last stage of withdrawal (see below).
We propose this rough framework around stopping of antidepressants, which clinicians can tailor to their own practices:
Consider the end game when starting a medication. How long would you plan to use it, if there was a positive or a negative response?
At each appointment, reassess whether the antidepressant has benefit.
Allow time to taper off the medication. This will increase the likelihood that the patient does not experience distress, which will in turn increase the willingness to try medications again in the future if needed.
Continue to assess symptoms during withdrawal, and question whether symptoms can be attributed to withdrawal, lack of efficacy, or natural fluctuation.
If withdrawal symptoms happen even with a long taper, consider giving a single dose of fluoxetine 20 mg, which has a long half-life and tapers itself.
Practically, it is easier to start medications than to stop them, and drug refills usually take but an instant. It may be expedient not to scrutinize the patient’s medication regimen at each visit, but skipping this review ultimately undermines the goals of doing no harm and helping patients choose treatments that will work best for them.
APPROACH TO THE PATIENT
Challenges in Treating Older Adults With Depression
A common saying in geriatrics is that one of the best ways to care for the patient is to care for the caregiver. This applies to health care providers as well as to family, informal, and paid caregivers. Treating older adults who are experiencing symptoms of depression can challenge a clinician’s patience and empathy. Even mental health providers who make this their life’s work often feel distress or frustration when interacting with people who have low mood, little energy, thoughts of death, and the other hallmark findings of depression. As is clear from the discussion above, patients with depression are almost never the “life of the party.” We encourage providers to recognize and reflect on the feelings, both positive and negative, that develop when working with people faced with these problems. Table 65-1 lists some of the challenges.
TABLE 65-1 ■ CHALLENGES WITH TREATING OLDER ADULTS WHO HAVE DEPRESSION
If clinicians start to feel especially negative emotions toward patients, or feel that they are emotionally exhausted by them, they might consider talking with a colleague, or taking a minute to reflect and recharge after an appointment that deals with mental health. It is especially important to remember that, despite our best efforts, we cannot “fix” people’s mental health problems for them. We can provide support, presence, and specific types of help, but ultimately individuals heal themselves (just as a wound
heals itself, even if assisted by interventions). It can be frustrating to get the sense that one is doing nothing, but simply being emotionally present and listening to another person’s distress is often enough.
Acknowledging Patients’ Distress
Patients may express or demonstrate feeling sad, depressed, lonely, or ill, and this may or may not involve a clinical diagnosis of major depression. Some of the distress that older adults feel may relate to a sense of disconnect from their medical providers and other people in their immediate circle. As visits with primary care and specialty providers become more frequent with new and chronic medical illnesses, these interactions become important opportunities to help patients feel seen and heard. Activities as basic as sitting down to talk with a patient, briefly touching them on the arm, and making regular eye contact instead of looking at notes, the computer, or a phone can mean the difference between a meaningful interaction and one that does not leave the patient feeling understood. Asking a patient “Is there anything else I should know?” or “What other things are going on in your life?” also helps build genuine interactions and gives deeper context for a patient’s presenting concerns. In other words, simple human presence is probably the most important way to ensure that older patients can express themselves.
How to Support Patients Between Appointments
It is difficult to ascertain what patients want. As Franz Kafka noted in a story, “To write prescriptions is easy, but to come to an understanding with people is hard.” Some people might want a prescription, although research suggests that many do not. What most people are truly seeking is likely genuine empathy and understanding, and someone to help them navigate their circumstances and suffering. Therefore, regular contact with patients for check-ins when they are experiencing life stresses can make a significant difference. This is likely one of the reasons behind the relative success of collaborative care models, which have shown that the “person who cares” does not need to be a mental health professional. Staff in a busy primary care clinic, such as nurses or care coordinators, can play a role in checking in more informally with patients who need support between visits. Helping patients identify resources for connection and practical supports can also go a long way toward alleviating the anxiety, frustration, and sense of being overwhelmed that older adults often experience when trying to navigate our fast-paced and complex society.
Management of Suicidal Ideation
Suicidal ideation is sometimes the way a patient communicates the level of their suffering; it may also reflect a genuine wish to die. Suicidal thoughts are very difficult to understand. Some theories frame suicidality mainly as a problem-solving approach, in which the best course of action is judged to be not being alive. Others have proposed that the suicidal condition represents a “negative quality of life,” in which someone would take action to increase the quality of life to zero, that is, death. These theories may help a clinician keep a balanced perspective and focus on effective problem-solving. In many instances, suicidal thoughts are a way for patients to ensure they will receive support when they feel hopeless or overwhelmed in navigating grief, social problems, or loss of function, and when they feel their needs are not being met or understood, rather than a true desire to end their life.
Obviously, suicidal statements and expressions of hopelessness or a desire to die should always be taken seriously. Supporting a patient with such thoughts can take many forms, often by helping the patient find a way to talk about their suffering while also having readily at hand ways to connect them with specialty mental health and community supports if needed. There is, however, no “one-size-fits-all” approach to addressing suicidality.
Several factors are associated with increased risk of suicide (see Figure 65-3), including male gender, social isolation, and substance abuse.
Uncontrolled pain is one significant factor in expressions of suicidal ideation and a risk factor for completed suicide, particularly in older men. Those with more social connections and supports are often able to navigate difficult circumstances, particularly if they have been successful in attaining resolution of problems at other developmental life stages. However, the quality of their social supports is the most important factor in assessing whether others can be helpful when a patient expresses suicidal ideation.
FIGURE 65-3. Suicide risk checklist. MADRS, Montgomery-Asberg Depression Rating Scale; QIDS-SR, Quick Inventory of Depressive Symptomatology-Self Scale.
It is most important to assess whether a patient expressing suicidal ideation has started to develop a plan or has one already formulated. If patients indicate that they have started rehearsing or have come close to acting on a specified plan, the risk is especially high. Finding out whether the patient has the means at hand, particularly if the plan involves a firearm, is critical in these circumstances, as removing the firearm or other means can decrease immediate risk. This may involve contacting someone in the patient’s life about the situation, both to alert them to the concern and to assist in decreasing access to the means, especially where firearms are involved. Even when an expressed plan does not include the use of firearms, asking a patient if they own a gun is an important safety check.
Some patients may need to be considered for hospitalization, especially if they express that their plan is imminent and means are readily available. Hospitalization can provide a safe environment to better understand the circumstances contributing to the patient’s feelings of hopelessness and to assist in identifying supports and connections to community and mental health resources. Referral to a mental health professional can be an important step for ongoing assessment and care but may not be immediately available except in an emergency room setting. Patients with suicidal ideation who are experiencing symptoms of psychosis are also at particular risk, especially if their symptoms include command auditory hallucinations to harm themselves or others.
Quality of Life and Depression in Old Age
Older adults are often concerned about the quality of their life as they age, especially as they experience sensory deficits, loss of physical functioning, and pain. Goals of care should be discussed in depth and at frequent intervals. One of the benefits of the emergence of palliative care medicine is the identification of ways for such discussions to happen alongside primary care and specialty providers. One does not need to specialize in palliative care, however, in order to identify ways to alleviate suffering. Conversations about what a person sees as quality of life can often lead away from pursuing aggressive and potentially unsuccessful treatment. Alleviating suffering, including relieving pain, supporting practical needs, and identifying and providing meaningful emotional and spiritual supports are the cornerstones of palliative care medicine.
Palliative care should not be confused with end-of-life care or hospice, as it applies to a much broader range of patients. Discussing advance care directives with patients is one important way to avoid unnecessary or unwanted treatments and to give patients control over end-of-life situations. Primary care and specialty providers will also face situations where the family of a patient may not agree with the choices their family member is expressing. Providing a place for difficult conversations to occur and supporting patient choices, options, and agency are some of the most important things providers can do to make patients feel supported.
Management of Depression in Long-Term Care Settings
Considering the setting in which there is concern that a patient might have depression also offers valuable opportunities to educate staff about tailoring a patient-centered care plan that can alleviate patient and family concerns. Because antidepressants prescribed in nursing homes require consideration of gradual dose reduction, it is often easier to improve supports for patients in distress by identifying meaningful activities, social interactions, and community supports, and by addressing aspects of the environment and care delivery that contribute to an unsatisfactory quality of life and, in turn, to feelings of helplessness and loss of control. Offering choices, managing environmental noise, reminding staff of the importance of social interactions outside of care delivery, and identifying the things a patient most values for comfort and peace are good places to start such conversations with the larger care network, including identifying caregiver stress that may be affecting those caring for patients either at home or in a long-term care setting. Reminiscence therapy has shown good benefits for improving depressive symptoms in long-term care settings.
Indications for Specialty Care Referrals
Not all patients with depression necessarily need to see a mental health specialist, though this is often part of the treatment plan when therapy is involved. If a patient with major depression is being treated and not improving, if their condition is worsening, or if ECT or complex augmentation strategies are being considered, this may be a time to consider a psychiatry consultation or referral. There are very few geriatric psychiatry specialists relative to the number of older adults needing care. Such subspecialty referral is not necessarily needed; most general psychiatrists should feel comfortable evaluating and treating older adults. Medically complex or cognitively impaired patients and those with a major depressive
episode related to bipolar disorder or with psychotic features associated with their mood disorder may be best suited for geriatric subspecialty care given the risks of medications in this population.
Acknowledgment
Many thanks to Charles F. Reynolds, III, MD, for his contributions to the Depression chapter in the 7th Edition of this book. Material from that chapter on diagnosis of depression, including two figures, has been incorporated into this one.
FURTHER READING
Cameron IM, Reid IC, MacGillivray SA. Efficacy and tolerability of antidepressants for subthreshold depression and for mild major depressive disorder. J Affect Disord. 2014;166:48–58.
Carlson WL, Ong TD. Suicide in later life: failed treatment or rational choice? Clin Geriatr Med. 2014;533–576.
Casey DA. Depression in the elderly: a review and update. Asia Pac Psychiatry. 2012;4(3):160–167.
Donovan RJ, Anwar-McHenry J. Act-Belong-Commit: lifestyle medicine for keeping mentally healthy. Am Lifestyle Med. 2016;10:193–199.
Farina N, Morrell L, Banerjee S. What is the therapeutic value of antidepressants in dementia? A narrative review. Int J Geriatr Psychiatry. 2017;32(1):32–49.
Gueorguieva R, Chekroud AM, Krystal JH. Trajectories of relapse in randomised, placebo-controlled trials of treatment discontinuation in major depressive disorder: an individual patient-level data meta-analysis. Lancet Psychiatry. 2017;4(3):230–237.
Hegerl U, Schönknecht P, Mergl R. Are antidepressants useful in the treatment of minor depression: a critical update of the current literature. Curr Opin Psychiatry. 2012;25(1):1–6.
Kobak KA, Taylor L, Katzelnick DJ, et al. Antidepressant medication management and Health Plan Employer Data Information Set (HEDIS) criteria: reasons for nonadherence. J Clin Psychiatry. 2002;63:727–732.
Lutz JL, Van Orden KA, Bruce ML, et al. Social disconnection in late life suicide: an NIMH workshop on state of the research in identifying
mechanisms, treatment targets, and interventions. Am J Geriatr Psychiatry. 2021;29(8):731–744.
Maslow AH. A theory of human motivation. Psychol Rev. 1943;50:370–396.
National Academies of Sciences, Engineering, and Medicine. Social Isolation and Loneliness in Older Adults: Opportunities for the Health Care System. Washington, DC: The National Academies Press; 2019.
Ostrow L, Jessell L, Hurd M, Darrow SM, Cohen D. Discontinuing psychiatric medications: a survey of long-term users. Psychiatr Serv. 2017;68(12):1232–1238.
Park M, Unützer J. Geriatric depression in primary care. Psychiatr Clin North Am. 2011;34(2):469.
Phelps J. Tapering antidepressants: is 3 months slow enough? Med Hypotheses. 2011;77:1006–1008.
Santini ZI, Jose PE, Cornwell EY, et al. Social disconnectedness, perceived isolation, and symptoms of depression and anxiety among older Americans (NSHAP): a longitudinal mediation analysis. Lancet Public Health. 2020;5:e62–e70.
Schatzberg AF, Blier P, Delgado POL, et al. Antidepressant discontinuation syndrome: consensus panel recommendations for clinical management and additional research. J Clin Psychiatry. 2006;67(suppl 4):27–30.
Shelton RC. Steps following attainment of remission: discontinuation of antidepressant therapy. Prim Care Companion J Clin Psychiatry. 2001;3(4):168–174.
Thielke S, Diehr P, Unützer J. Prevalence, incidence, and persistence of major depressive symptoms in the Cardiovascular Health Study. Aging Ment Health. 2010;14(2):168–186.
Tulner LR, Kuper IMJA, Frankfort SV, et al. Discrepancies in reported drug use in geriatric outpatients: relevance to adverse events and drug-drug interactions. Am J Geriatr Pharmacother. 2009;7(2):93–104.
Tveito M, Bramness JG, Engedal K. Psychotropic medication in geriatric psychiatry patients: use and unreported use in relation to serum concentrations. Eur J Clin Pharmacol. 2014;70:1139–1145.
Undurraga J, Baldessarini RJ. Randomized, placebo-controlled trials of antidepressants for acute major depression: thirty-year meta-analytic review. Neuropsychopharmacology. 2012;37:851–864.
Unützer J, Carlo AC, Arao, et al. Variation in the effectiveness of collaborative care for depression: does it matter where you get your care? Health Aff. 2020;39(11):1943–1950.
Unützer J, Katon W, Callahan CM, et al. Collaborative care management of late-life depression in the primary care setting: a randomized controlled trial. JAMA. 2002;288(22):2836–2845.
Wu C-H, Farley JF, Gaynes BN. The association between antidepressant dosage titration and medication adherence among patients with depression. Depress Anxiety. 2012;29:506–514.
Chapter 66
General Topics in Geriatric Psychiatry
Ellen E. Lee, Jeffrey Lam, Dilip V. Jeste
MENTAL HEALTH ISSUES IN AGING
Relevant Population Demographic Information
The fact that the population in the United States will grow older in the coming decades is now widely recognized. Health care professionals continue to devote increasing time to the management of geriatric patients, reflecting the dramatic growth in the old and very old adult populations. The overall structure of the population will also change. Projections by the United States Census Bureau estimate that by 2050, the number of individuals older than 65 years will increase from 49 million in 2016 to 86 million.
Those in the oldest age group, 85 years and older, will increase from 6.4 to 19 million individuals. Figure 66-1 depicts the rapid growth in individuals 65 years or older between 1900 and 2060. Changes are also projected in racial and ethnic diversity with increases in both the older and the oldest old cohorts over the next four decades. By 2060, the self-reported racial distribution of those 65 and older will become more diverse, with White and non-Hispanic White groups decreasing, and all other groups increasing.
Notably, from 2016 to the projections in 2060, Hispanic or Latino populations will increase their percent of resident population from 18% to 29%, Asians from 6.2% to 9.7%, and two or more races from 2.1% to 6.1%. While female life expectancy and proportion of older women are projected to continue to exceed those of men, this gap is narrowing, which could lead to secondary social and economic changes. Among the 65 years and older age group, the percentage of women will decrease from 56% in 2016 to 54% in 2060. Among those 85 years and older, the percentage of women will decrease from 65% to 61%.
FIGURE 66-1. Number of persons in the United States, 65 years and older, 1900 to 2060 (numbers in millions). This chart shows the large increases in the older population, from 3.1 million people in 1900 to 43 million in 2012 and a projected 92 million in 2060. Note: Increments in years are uneven. (Reproduced with permission from U.S. Census Bureau, Population Estimates and Projections.)
The federal Administration on Aging data reveal the current living arrangements and incomes for older adults. In 2019, 57% of community-dwelling older adults lived with their spouse (72% of men and 49% of women) and 28% were living alone (19% of men and 35% of women). Over 2% of older adults lived in institutional settings, including nursing homes; however, this number increases substantially with age, rising to 10% for those 85 years and older. Figure 66-2 displays this information separately for men and women. Data from 2018 showed the median income of older persons was $26,000 ($34,000 for men and $20,000 for women).
These data varied by race, with non-Hispanic Whites averaging a higher median income than Hispanics, African-Americans, and Asians.
FIGURE 66-2. Living arrangements of men and women in the United States, 65 years or older, 2012. (Reproduced with permission from U.S. Census Bureau, Current Population Survey, Annual Social and Economic Supplement.)
Learning Objectives
Understand the prevalence of mental health and psychosocial problems associated with aging, and their adverse consequences on longevity and quality of life.
Learn about the epidemiology, common clinical presentations, evaluation tools, and management of psychiatric conditions commonly seen in older adults.
Acquire information necessary to recognize suicidal behavior in older adults, best ways to assess suicide risk, and effective strategies to manage suicidal patients.
Understand the principles underlying use of antipsychotic medications in older adults, assessment of their risk-benefit ratio, clinical indications, side effect profile, and monitoring of clinical response.
Learn about the prevalence, diagnosis, and treatment of substance use and personality disorders in older adults.
Gain new knowledge about how “successful aging” can enhance emotional and psychological health in older adults, and promote longevity through salutary effects on cognition, physical function, and social interaction.
Key Clinical Points
1. The prevalence of psychiatric diseases increases with aging and exerts adverse effects on longevity, cognition, physical health, social interactions, and comorbid illnesses.
2. Mood disorders are common in older adults and associated with a high risk of suicide. It is important to recognize suicidal behavior and learn how to assess suicide risk and the best ways to manage suicidal patients.
3. Use of antipsychotic medications in older adults is associated with serious adverse effects, including higher death rates in patients with dementia. None of these medications are Food and Drug Administration (FDA) approved for management of behavioral or psychological symptoms of dementia (BPSD), and there is no evidence from randomized trials that they are effective in managing psychotic symptoms.
4. Anxiety disorders are common in older adults and can present in many forms, including as generalized anxiety disorder (GAD), posttraumatic stress disorder (PTSD), obsessive-compulsive disorder (OCD), or panic disorder (PD). Additionally, anxiety symptoms are often associated with depression, dementia, medical comorbidities, and substance abuse. Psychotherapy and selective serotonin reuptake inhibitors (SSRIs) or serotonin-norepinephrine reuptake inhibitors (SNRIs) are the cornerstone of therapy for most anxiety disorders.
5. Promoting the principles of “successful aging” and positive psychological traits, such as optimism, resilience, and social engagement, enhances neuroplasticity and improves cognition, physical function, and overall quality of life.
Epidemiology of Psychiatric Disorders in Later Life
Mental well-being in older age is no less crucial than at any other stage of life. The Centers for Disease Control and Prevention (CDC) estimates that 20% of individuals older than 55 years suffer from a mental health disorder. In older adults, the burden from Alzheimer disease (AD) and other dementias increases, while the burden of other mental disorders, substance use disorders, and self-harm is reported to decrease over the lifespan. However, this latter trend may be confounded by underreporting and underdiagnosis. The burden of mental health disorders among older adults is exacerbated by stigma and a lack of help-seeking. Figure 66-3 presents additional information about the burden of mental disorders across the lifespan.
FIGURE 66-3. The global burden of mental and substance use disorders, Alzheimer disease and other dementias, and suicide (self-harm) in Disability-adjusted life years (DALYs) across the life course. (Reproduced with permission from Patel V, Saxena S, Lund C, et al. The Lancet Commission on global mental health and sustainable development. Lancet.
2018;392[10157]:1553–1598.)
As the United States shifts from a youth-dependent population to an older-aged-dependent population and with the rise of the burden of neurocognitive disorders among older adults, the health care system must
keep up with the growing demand for geriatric mental health care. The timely recognition, diagnosis, and treatment of neuropsychiatric illnesses are crucial to maintaining and bolstering the quality of life in older adults, but there is a substantial gap between the supply and demand of mental health care professionals for older adults, which is anticipated to only worsen over the
coming years. As health care providers, it will be essential to prepare for these dramatic shifts in population demographics in order to ensure that a proportional growth in geriatric-specific mental health care occurs.
PSYCHOSOCIAL DEVELOPMENT ACROSS THE LIFESPAN
Health-defining, expected psychosocial developmental tasks or goals, sometimes referred to as psychological developmental milestones, vary according to age range. Psychologist Erik Erikson conceived of one of the most widely used and best-known models of human psychosocial development from infancy through old age and the end of life. Like Sigmund Freud, Erikson believed that personality develops in a series of stages; however, Erikson’s theory represented a marked shift from Freud’s psychosexual theory. While Freud’s theory of psychosexual development essentially ended at early adulthood, Erikson’s theory described development through the entire lifespan, from birth until death, and took into account the impact of social experience across the entire lifespan. The importance of Erikson’s contribution to our modern understanding of psychosocial development cannot be overstated. For example, his efforts have inspired modern contemporaries such as George Vaillant, author of a number of books in this area, including Aging Well. In large part because of the concepts and theories that Erikson espoused, it is now widely understood that adults of all ages, including the oldest old, may benefit from various forms of psychotherapy as long as these individuals have intact short-term memory and are motivated to make changes in their thoughts, beliefs, and behaviors.
In Erikson’s model, the eighth and final stage of psychosocial development is characterized by the core conflict of integrity versus despair. This late adult stage includes those in the oldest age group 65 years and older. During this time, individuals grapple with answering deeper, existential questions such as, “Did I have a meaningful life?” and they reflect back on the life that they had. Wisdom is the “basic virtue” or resolution for this stage. Those who feel proud and satisfied with their accomplishments will master a sense of integrity. They have a sense of wisdom and acceptance when facing death and other end of life challenges. In contrast, someone who fails to master the developmental milestones of this stage may feel that his or
her life has been wasted or lacked meaning and may experience feelings of regret, bitterness, and despair.
Ageism
In spite of steadily growing interest among scientists, clinicians, and members of the general public, aging and later-life stages are often characterized by negative generalizations, myths, and stereotypes. While in the past many scholars believed that aging was associated with increases in psychopathology, recent research demonstrates that older adults can be emotionally healthy, and on average may be emotionally even healthier than younger and middle-aged adults. Depression and anxiety are not normal parts of growing older. The assumption of inevitable decline in mental health with aging is often propagated by various forms of media and may even be perpetuated by health care professionals whose opinions are inaccurately biased by sampling error, for example, daily professional exposure only to those older individuals who are ill and in distress. In addition, the older individual may adopt inaccurate understandings or expectations when a pejorative view of later life obtained through media or some other type of exposure is internalized and enacted. Beyond the stigmas associated with aging in general, an even greater stigma exists for people living with a diagnosis of neurocognitive impairment. Perceptions of the relationship between sexuality, aging, and dementia illustrate some of these misconceptions and stereotypes. While some studies have found that sexual activity decreases in frequency with increasing age, there is no limit to
sexual responsiveness, and many older people, including those with cognitive impairment, remain sexually active. Furthermore, sexuality has become increasingly important in successive cohorts of older adults. In professional training programs and clinical care venues, there is a pressing need to challenge these biases and correct these inaccurate views because of their potential to negatively impact the health and well-being of older adults. For example, because of the ageist belief that older individuals do not engage in sexually intimate experiences, a clinician may omit questions about the older individual’s recent and ongoing sexual experiences, resulting in a missed opportunity to discuss how condom use is not only a form of contraception but also a good method of reducing the risk of sexually transmitted diseases. In addition, ageist beliefs may be one reason that some clinicians are surprised to learn that many older adults living with
dementia are leading meaningful, productive, and satisfying lives; such beliefs may also contribute to missed opportunities to help these older adults achieve the highest quality of life possible in spite of their dementia.
Common Emotional and Psychosocial Challenges Associated With Aging
With aging, certain mental health and psychosocial issues begin to emerge. Most notably, the later years of life may be characterized by various losses that for many were not present in earlier life stages. Some of these losses are specific to the individual, such as loss of function, physical limitations, and decline in physical health and cognition. Medical comorbidities, injuries, or pain may limit mobility, while cognitive decline may jeopardize instrumental activities of daily living or other aspects of independence. As these new issues arise, aging adults may draw upon previous patterns of coping or discover opportunities for positive change and growth. Retirement is a common change and a potential challenge that arises later in life. Many look forward to retirement as the golden years of their life. Although retirement may offer opportunities for traveling, spending time with family, nurturing old hobbies, and exploring new ones, it also marks a dramatic role transition and change in identity that may not measure up to idealized expectations and may contribute to social isolation. Many individuals also experience notable losses to their social support system, including family members, friends, and peers, during these years. In these settings, bereavement and grieving are normal responses to loss, but monitoring for an adjustment disorder, complicated grief, depression, or substance misuse may be warranted. Complicated grief affects up to 7% of geriatric patients. It may present symptomatically like a major depressive disorder (MDD) triggered by a loss. An individual with complicated grief may meet criteria for MDD and may require treatment of the depression with pharmacotherapy, individual therapy, supportive counseling groups, referral to a specialist, or some combination of these options. Feelings of self-worth are often preserved in normal grief reactions.
Symptoms such as pervasive hopelessness, helplessness, guilt, diminished ability to experience pleasure, and suicidal ideation should raise concern for a major depressive episode.
While a longing to be reunited with a deceased loved one or family member is often normal, careful screening and aggressive treatment may be indicated for more well-defined or active thoughts of suicide.
As physical and functional challenges arise in older patients, the need for a caregiver increases. The primary caregiver is often a spouse or adult child of the older patient. The role of a caregiver is fraught with physical, psychological, and emotional challenges, whether caring for someone with or without dementia. Caregivers may suffer significant morbidity and mortality, and new-onset mental health issues may arise in caregivers during this time. Screening for stressors, coping, and social support helps the clinician direct caregivers to helpful resources and information, which indirectly also helps support the patient and the family or caregiving unit.
Resources and support groups are often underutilized but effectively offer an opportunity for education, stress management, community, and solidarity.
When resources are available, a hired caregiver may help ease the caregiving burden or may be necessary when support from family or close friends is not available.
Older adults are at particularly high risk for loneliness, defined as the subjective distress arising from an imbalance between desired and perceived social relationships. Subjective loneliness differs from objective social isolation. Aging-related risk factors include widowhood, physical disability, poor health, and caregiving responsibilities. Several studies have found that loneliness increases with aging and that adults in their late 80s were lonelier than adults aged 60 to 80 years. Loneliness is associated with negative mental and physical health outcomes, including alcohol and drug abuse, poor nutrition, sedentary behaviors, and poor physical functioning, as well as increased mortality. While the relationship is bidirectional, loneliness predicts the development of depressive and anxiety symptoms. Nine longitudinal studies of nearly 41,000 older adults have reported loneliness to be a significant predictor of cognitive decline as well as of the development of mild cognitive impairment and dementia. Though the underlying biological processes have not been elucidated, increased amyloid-β and tau burden have been observed in cognitively normal older adults who are lonely.
Impact of COVID-19 Pandemic
The COVID-19 pandemic has disproportionately affected older adults. Older populations are at a higher risk of experiencing more severe complications and higher mortality when contracting SARS-CoV-2. Due to these heightened risks, older adults have been required to make important adjustments to their day-to-day lives, including stringent physical distancing measures to
decrease the chances of exposure. Tragic outbreaks in nursing home settings and senior housing facilities have further restricted the everyday activities of these populations. As preventive social distancing measures have increased, social isolation has increased proportionally. The COVID-19 pandemic has been particularly isolating for older adults, given their lower familiarity with the technologies that facilitate social interactions or virtual visits with family, friends, or even health professionals.
From a mental health perspective, the dynamic nature of the global COVID-19 pandemic has led to uncertainty, stress, and for some, bereavement. Preliminary evidence indicates that the combination of the challenges due to the pandemic has led to worsening mental health outcomes, including increased incidence of depression and anxiety among all age groups. However, contrary to expectations, preliminary evidence demonstrates that older adults have been more resilient, experiencing fewer negative mental health outcomes compared to other ages.
COMMON EVALUATION TOOLS
Cognitive Tests
Cognitive tests are effective tools for screening and diagnosis in ambulatory clinic and inpatient settings when patients are suspected to have cognitive impairment or a neurocognitive disorder. Several different tests are widely used, each with different strengths and weaknesses. Short questionnaires such as the Mental Test Score (MTS) or the 6-Item Cognitive Impairment Test (6CIT) are quick to administer but provide limited information. These are used most effectively as screening tools in a population with a high prevalence of cognitive impairment or deficits. The Clock Drawing Test, General Practitioner Assessment of Cognition (GPCOG), Mini-Cog, and Memory Impairment Screen have higher selectivity for neurodegenerative disease and are also quick to administer but similarly provide a limited amount of information.
Multi-domain evaluation tools include the Mini-Mental State Examination (MMSE), Montreal Cognitive Assessment (MoCA), St. Louis University Mental Status Examination (SLUMS), Addenbrooke’s Cognitive Examination (ACE), and Test Your Memory (TYM). These tests provide a broader and somewhat more comprehensive picture of the patient’s cognitive abilities and may be best used to detect mild cognitive problems. Disadvantages of these
tools include a longer time to administer with the exception of the TYM, which is a self-administered report. These tools are most helpful when assessing patients who are presenting with known memory problems. They help characterize the type and quantify the severity of cognitive deficits when they exist. While these tests are effective in identifying cognitive deficits, they are limited in their ability to diagnose or determine the etiology and type of a dementia.
The gold standard for diagnosing dementia and assessing neurodegenerative diseases and other cognitive deficits is a neuropsychological battery, also called neuropsychiatric testing. This extensive and comprehensive assessment is performed by a specialist and may require at least 3 to 5 hours depending on the case. In addition to a full history and clinical interview, a complete neuropsychiatric battery often includes, but is not limited to, the following: California Verbal Learning Test (CVLT-2), Wechsler Adult Intelligence Scale (WAIS-4), subtests of the Wechsler Abbreviated Scale of Intelligence Second Edition (WASI-2), the Mattis Dementia Rating Scale (DRS), subtests of the Wechsler Memory Scale such as Visual Reproduction and Logical Memory, Test of Memory Malingering (TOMM), the Boston Naming Test (BNT), the Halstead-Reitan Battery, the Wide Range Achievement Test (WRAT-4), the Clock Drawing Test, the Trail Making Tests Part A and Part B, the Verbal Fluency tests from the Delis-Kaplan Executive Function Scale (D-KEFS), and the Wisconsin Card Sorting Test-64 (WCST-64). This type of testing also often includes screens for depression and anxiety symptoms that could be influencing cognitive performance, such as the Geriatric Depression Scale (GDS) and Geriatric Anxiety Inventory (GAI), respectively. Other psychiatric screening tools that may be used in conjunction with, or in addition to, neuropsychiatric testing for clinical suspicion of other diagnoses or comorbidities include the PHQ-9 for depression, the Young Mania Rating Scale, GAD-7 for anxiety, PC-PTSD for PTSD, and Y-BOCS for OCD.
Other Evaluation Modalities
Other evaluation modalities, in addition to cognitive testing and screening, help determine the etiology of cognitive deficits or other psychiatric problems. For example, brain imaging, whether obtained from a computed tomography (CT) or magnetic resonance imaging (MRI), may help exclude a neurologic or general medical condition in the differential diagnosis. A brain
CT or MRI may also help establish the etiology of dementia. Specifically, a brain CT or MRI assesses cerebral grey matter volume, cortical thickness, and vascular changes such as those that result from chronic untreated hypertension or atherosclerosis. Referrals to specialists in geriatric psychiatry and neurology may be warranted when considering or interpreting results of these tests. Other diagnostic tests and emerging methods specifically for making a diagnosis of Alzheimer dementia include cerebrospinal fluid (CSF) amyloid and tau, computer-assisted volumetric brain imaging, fluorodeoxyglucose positron emission tomography (FDG-PET), which measures cerebral glucose metabolism, and single-photon emission computed tomography (SPECT), a measure of cerebral perfusion.
At present, Medicare will pay for an FDG-PET in cases when the clinician is not certain whether the patient has Alzheimer dementia or frontotemporal dementia. Future studies seek to identify reliable biomarkers or the combined use of several biomarkers for diagnosing dementia and its subtypes and engaging in prevention and treatment measures early in the course of the disease.
Evaluation for Delirium
Including delirium in the differential diagnosis is appropriate when assessing changes in mentation, confusion, notable behavioral changes from a previous baseline, and new or sudden onset of psychiatric symptoms. An evaluation for delirium involves a clinical interview and diagnostic evaluation of possible medical causes including medical conditions, medication side effects, or substance intoxication and withdrawal among others. The clinical hallmark of delirium is fluctuations in mental status, especially attention and alertness, and the symptoms are potentially reversible. Tools used in cognitive screening such as the MoCA or SLUMS and other cognitive exercises requiring sustained attention are especially helpful in making the diagnosis of delirium.
PSYCHOSIS IN OLDER PATIENTS
Introduction
Psychosis is a general term referring to a mental condition affecting thinking, perceptual experiences, and behaviors in a way that almost always impairs functioning. “Positive” psychotic symptoms include delusions, hallucinations, disorganized speech, and disorganized behavior. “Negative”
psychotic symptoms may include reduced social interaction, reduced facial expressivity, reduced physical activity, reduced thought content, and reduced speech. Psychotic symptoms are common in older adults where they may be related to chronic psychotic disorders, mood disorders, neurocognitive disorders, or a symptom of delirium (ie, related to an underlying medical condition). Psychosis is almost always distressing for both patients and their families, and is associated with decreased quality of life and functioning, more caregiver distress, increased health care costs, and increased risk for placement in long-term care facilities. The most common primary psychotic disorders found in older adults are discussed here; mood disorders and delirium are discussed in detail elsewhere in the text.
Schizophrenia
Schizophrenia is a chronic psychotic disorder that causes significant social and occupational dysfunction. Symptoms of schizophrenia may include delusions, hallucinations, disorganized speech, disorganized or catatonic behavior, and negative symptoms. A diagnosis of schizophrenia requires at least two of these symptoms, one of which must be delusions, hallucinations, or disorganized speech. The symptoms must persist over at least a 6-month period of time and other conditions explaining the symptoms must be ruled out (eg, delirium, mood disorders). The prevalence of schizophrenia is estimated to be 0.6% among adults ages 45 to 64 and 0.1% to 0.5% in older populations. A minority of patients, approximately 20%, have onset of schizophrenia after age 40. Therefore, a majority of older adults with schizophrenia have had an earlier onset followed by a chronic course over many years. Studies suggest that schizophrenia with onset between the ages of 40 and 60, which is described as late-onset schizophrenia (LOS), differs from early-onset schizophrenia (EOS) in several important ways.
Specifically, LOS is generally associated with a lower average severity of positive symptoms (ie, delusions, hallucinations, disorganized speech, or behavior), and lower average antipsychotic dose requirement. Further, the vast majority of patients with LOS are women. Accordingly, some have proposed that LOS is a distinct subtype of schizophrenia. At this time, it is not clear if LOS incidence in aging men will increase due to their gradually increasing life expectancy. However, the term LOS has not been incorporated into the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-5), which states “late-onset cases can meet the diagnostic
criteria for schizophrenia, but it is not yet clear whether this is the same condition as schizophrenia diagnosed prior to mid-life.”
Delusional Disorder
Delusional disorder is characterized by the presence of delusions without other primary symptoms of schizophrenia (eg, hallucinations, disorganized speech or behavior, restricted affect). The disorder typically first appears in middle to late adulthood, with an average age at onset of 40 to 49 years for men and 60 to 69 years for women and is, therefore, more common in older adults than their younger counterparts. A diagnosis of delusional disorder can only be made when all other possible explanations for delusions have been ruled out (eg, delirium, neurocognitive disorders, mood disorders, and schizophrenia). According to DSM-5, the lifetime prevalence of delusional disorder is estimated to be 0.2%.
Psychosis in Neurocognitive Disorders
Psychotic symptoms are common in neurocognitive disorders. For example, it is estimated that 41% of patients with AD will experience psychotic symptoms at some point in the course of their illness. Some common psychotic symptoms in AD include misidentification of caregivers, suspiciousness, and delusions of theft. Hallucinations in AD are less common but may occur. Visual hallucinations are common in Lewy body disease and psychotic symptoms may also be associated with other neurocognitive disorders.
Evaluation
The evaluation of psychotic symptoms in older adults begins with a careful history, ideally by interview with both the patient and a reliable informant such as a family member. The time-course of symptoms can be particularly helpful as patients with chronic psychotic disorders (such as schizophrenia) and mood disorders, which are generally episodic, have a history of symptoms consistent with their diagnosis. Because psychosis is a common symptom of delirium, a medical evaluation should be performed to assess for any possible underlying medical condition. This evaluation may include a physical and neurological examination, laboratory tests, and brain imaging. A careful review of medications that could be contributing to the symptoms should also be done and substance use should be ruled out. The importance
of a thorough medical evaluation cannot be overemphasized. Studies suggest that undetected medical conditions may contribute to as many as 34% of hospital admissions for behavioral symptoms in older individuals.
The possible explanations for this phenomenon range from the influence of ageism (eg, the myth that paranoia is expected in older individuals) to several practical barriers. These include the extra time and effort required to obtain immediately useful historical information from older individuals because of normal age-associated changes in communication (eg, increasingly circumstantial speech, delayed retrieval of information stored in memory, and slower speech production); discomfort associated with performing certain aspects of the physical examination in an older individual, such as pelvic and rectal examinations; and the extra steps and staff needed to examine an older individual whose neurodegenerative illness has taken away the ability to understand and cooperate with the examiner. In spite of these and other potential barriers to an optimal medical evaluation, accurately diagnosing and optimally treating medical conditions leads to more rapid restoration of the patient’s health and avoids wasting increasingly precious resources. After assessment for contributing medical conditions and medication side effects, referral to a geriatric psychiatrist may be helpful.
Management and Treatment
Antipsychotic and other medications Antipsychotic medications are commonly used to treat psychotic symptoms in older adults. Antipsychotic medications have been approved by the Food and Drug Administration (FDA) primarily for treatment of schizophrenia. In addition, some are FDA-approved for use in the treatment of bipolar disorder and for use in combination with antidepressants in the treatment of MDD. Antipsychotics are also widely used “off-label” in the treatment of psychosis related to neurocognitive disorders despite limited data to support the long-term safety and effectiveness of these medications in this population. This widespread “off-label” use of antipsychotics reflects the lack of FDA-approved pharmacological alternatives for psychosis associated with neurocognitive disorders and the stress and suffering these symptoms cause for patients, their families, and others such as individuals living in the same residential care environment. In spite of what is known about the risks of prescribing antipsychotic medications to older individuals, it bears remembering that
there are no FDA-approved medications from any class or category for the treatment of behavioral and psychological symptoms of dementia (BPSD), including psychosis. Nor have there been an adequate number of high-quality head-to-head studies comparing antipsychotic medications with medications from other classes or categories to determine whether any of the other possibly helpful medications are better at reducing symptoms and/or less likely to cause serious side effects. The American Psychiatric Association Council on Geriatric Psychiatry recently published online a “Resource Document on the Use of Antipsychotic Medications to Treat Behavioral Disturbances in Persons with Dementia.” This document is, perhaps, the most thoughtful, balanced, and helpful summary currently available to guide clinicians. It references recent longitudinal findings indicating that it may actually be the symptoms of psychosis and agitation, rather than the antipsychotic medications themselves, that lead to increased rates of mortality and institutionalization. In addition, this document emphasizes that nonpharmacological behavioral treatment strategies should be attempted as first-line approaches unless the behaviors are so severe that the patient or others are in imminent danger.
Lastly, this document notes that the currently available evidence demonstrating that nonpharmacological approaches are safer or more effective than antipsychotics in the treatment of dementia-related psychosis or behavioral disturbance is limited.
Despite their widespread use, caution is warranted and the potential benefits and risks of antipsychotic use in each older individual patient must be carefully weighed. Pharmacokinetic and pharmacodynamic changes that occur with age lead to an increased sensitivity to antipsychotics in older individuals. Specifically, decreases in total body water and muscle mass combined with a relative increase in the proportion of adipose tissue result in an increased volume of distribution and slower elimination of antipsychotic medications, while decreased hepatic protein synthesis results in more “free” drug in the circulation. Further, increased permeability of the blood-brain barrier occurring with age can lead to higher concentrations of antipsychotic medications in the CNS. Aging is also associated with decreased synthesis and increased metabolism of dopamine, a decreased number of dopaminergic neurons, and decreased density of dopamine (D2) receptors in the brain. This age-related increase in sensitivity to antipsychotics often translates to lower antipsychotic dose requirements for treatment of psychotic symptoms in older adults and places older adults at
greater risk for antipsychotic-induced side effects such as extrapyramidal symptoms (EPS, eg, parkinsonism). Also of concern in older adults is an increased risk of falls in those taking antipsychotics, likely due to a combination of the autonomic effects, EPS, and sedative properties of the medications.
Because second-generation or “atypical” antipsychotics (eg, risperidone, olanzapine, quetiapine, ziprasidone, aripiprazole, paliperidone, iloperidone, asenapine, lurasidone) are generally associated with a lower risk for parkinsonism and tardive dyskinesia than first-generation antipsychotics, they have become popular for treatment of psychosis. Numerous studies, however, have now documented other liabilities of atypical antipsychotics, including an elevated risk of metabolic side effects, and the FDA has issued warnings regarding increased risk for cerebrovascular adverse events and mortality in older patients with neurocognitive disorders, first for atypical antipsychotics and subsequently for typical antipsychotics. One study evaluating the long-term safety and effectiveness of atypical antipsychotics in older adults compared four commonly prescribed atypical antipsychotics (aripiprazole, olanzapine, quetiapine, and risperidone) in 332 outpatients older than 40 years with psychotic symptoms related to a variety of psychiatric diagnoses over 2 years of treatment.
Concerning findings included a high 1-year cumulative incidence of metabolic syndrome (36% in 1 year), high rates of both serious and nonserious (51%) adverse events, and no significant improvement in symptoms. Further, over half of the study participants discontinued their assigned medication within 6 months, most often due to side effects (52%) or lack of efficacy (26%). These findings suggest that the commonly used atypical antipsychotic medications may be helpful short-term but neither safe nor effective over longer periods of treatment in middle-aged and older adults.
The use of antipsychotic medications to treat schizophrenia in older adults is largely based on studies conducted on younger adults. Risperidone and olanzapine have been shown to be effective for treatment of psychotic symptoms in middle-aged and older adults with schizophrenia in short-term studies. Single short-term trials suggest that aripiprazole and paliperidone could also be beneficial. It is noteworthy that many older adults with early-onset schizophrenia have fewer and less severe delusions and hallucinations compared to their younger counterparts, and a small minority of patients may experience sustained remission of illness. A reduction in dose or discontinuation of antipsychotic medication, therefore, may be possible in later years in some aging patients with schizophrenia. Antipsychotic use for treatment of LOS has not been adequately studied; LOS is generally associated with a better prognosis and a lower daily antipsychotic dose requirement than early-onset illness. Data regarding the pharmacological treatment of delusional disorder in older adults are limited, in part because patients with delusional disorder tend to lack insight and are difficult to enroll in randomized placebo-controlled trials. A survey of 48 experts in geriatric care in 2004 found antipsychotics to be the only recommended treatment for delusional disorder in older adults. Case studies and retrospective studies published since that time continue to support that view.
As noted above, there are currently no FDA-approved pharmacological treatments for psychosis related to neurocognitive disorders. However, given the lack of safe and effective evidence-supported alternatives, atypical antipsychotics continue to play a role in the treatment of some patients with dementia, particularly when the symptoms are severe (with potential for harm to self or others) and require aggressive treatment. The risk-to-benefit analysis in these situations is often complex. Potential benefits may include mitigation of agitation-associated injuries to patients (including falls), staff, and family members. Most dementia illnesses are progressive, and the need for antipsychotic medication may recede as the patient’s illness advances and associated additional brain injury has occurred. The only way to determine whether antipsychotic medication is still required is to reduce the dose incrementally and observe whether problem symptoms reemerge on the lowered dose.
Standardized measures of behavioral symptoms in neurocognitive disorders, such as the Pittsburgh Agitation Scale or the Cohen-Mansfield Agitation Inventory, can be useful to monitor the effects of treatment. Finally, it should be noted that patients with Lewy body disease are commonly sensitive to antipsychotic medications and particularly at risk for severe adverse reactions including parkinsonism, and anticholinergic and hypotensive effects. This sensitivity is most commonly associated with the antipsychotic medications, which are known to be potent dopamine receptor subtype 2 (D2) blockers, but it has also been observed and reported even with the very low-potency antipsychotics such as quetiapine.
Educating patients and their caregivers about possible risks and benefits associated with antipsychotic use in older adults and sharing the decision-making process are critical. Whatever treatment plan is implemented, the prescribing clinician should carefully document in the patient’s record that informed consent was obtained from the patient or, when the patient lacks capacity for the decision, from the appropriate proxy decision maker. If antipsychotic medications are used in older adults for any condition, we recommend starting with a low initial dose (25%–50% of that used in a younger patient) and titrating slowly. In patients who have been stably maintained on antipsychotic medications, consideration should be given to incremental dose decreases in order to determine the lowest effective dose. Patients should be vigilantly monitored for side effects and to determine whether the prescribed medication is effectively treating their symptoms.
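The “start low, go slow” guidance above reduces to simple arithmetic. As a purely illustrative sketch (the function name and example dose are hypothetical; only the 25%–50% range comes from the text, and this is not a clinical tool):

```python
def geriatric_starting_dose(usual_adult_dose_mg: float) -> tuple[float, float]:
    """Return the suggested geriatric starting-dose range:
    25%-50% of the dose used in a younger patient, per the
    guidance above. Illustrative arithmetic only, not clinical advice."""
    return (0.25 * usual_adult_dose_mg, 0.50 * usual_adult_dose_mg)


# Hypothetical example: for a drug whose usual adult starting dose
# were 2 mg, the geriatric starting range would be 0.5-1.0 mg,
# followed by slow titration and vigilant monitoring.
low_mg, high_mg = geriatric_starting_dose(2.0)
print(f"Start between {low_mg} mg and {high_mg} mg, then titrate slowly.")
```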
Non-Antipsychotic Medications In clinical practice, mood stabilizers, antidepressants, and sedative hypnotics are frequently used to treat psychosis and associated symptoms such as agitation and aggression, especially in persons with neurocognitive disorders. Yet, none of these medications have received FDA approval for those conditions. Moreover, they carry a clear risk of several major side effects. Therefore, considerable caution is warranted in using these medications in older patients.
Nonpharmacological interventions There are several nonpharmacological interventions that are beneficial for older adults with psychotic disorders. Cognitive Behavioral Social Skills Training (CBSST), a 36-session weekly group therapy program combining cognitive behavioral therapy (CBT), social skills training, and problem-solving training, resulted in improved functioning in middle-aged and older patients with schizophrenia or schizoaffective disorder compared with a supportive therapy control.
Helping Older People Experience Success (HOPES), a year-long program combining social skills training and preventive health care, was associated with improved community living skills and functioning, greater self-efficacy, and lower levels of negative symptoms in older adults with serious chronic mental illness, more than half of whom had schizophrenia or schizoaffective disorder. Functional Adaptation and Skills Training (FAST), a 24-week functional skills course, was also associated with improvement in functioning and decrease in utilization of emergency medical services in older adults with schizophrenia.
As noted above, data regarding psychosocial interventions for psychosis related to neurocognitive disorders are more limited. Small studies have
reported benefit with interventions such as one-to-one social interaction, support groups, music therapy, dance therapy, aromatherapy, bed baths, person-centered bathing, and muscle relaxation therapy. Thus far, however, studies testing these approaches have been limited by small sample sizes, lack of appropriate control groups, and suboptimal outcome measures. More research is therefore needed in this area, including better-powered, well-designed randomized controlled trials. Many studies of psychosocial interventions for behavioral symptoms in patients with neurocognitive disorders report a high placebo response rate, suggesting that the increased attention patients receive as part of a research study is itself beneficial.
MOOD DISORDERS
Introduction and Suicidal Behavior
Given the preference of many older patients to seek mental health care from their primary care provider coupled with their reluctance to seek care from a mental health specialist, including psychiatrists, the primary care physician often is presented with the very challenging task of assessing and caring for older patients at risk for suicide. Although the emerging trend of “embedding” mental health providers within the primary care clinic environment seems to be an effective method of overcoming the resistance of older individuals to receive care from psychiatrists and other mental health clinicians, for the time being most older individuals, including those at risk for suicide, will be evaluated and treated by primary care providers.
Chapter 65 of this text provides a very helpful summary of mood disorders in older individuals. This section of the current chapter provides additional information about three topics related to mood disorders: suicide, suicide risk assessment, and the treatment of suicidal ideation.
Older individuals account for a significant proportion of deaths by suicide in the United States. Statistics from the CDC in 2018 indicate that the rate of suicide among all older adults has increased in the last decade. In 2018, the most recent year for which data are available, those aged 55 to 64 had the highest suicide rate (20.20 per 100,000), with those aged 65 to 74, 75 to 84, and 85 and older also displaying relatively high rates
(16.31, 18.7, and 19.1 per 100,000, respectively). White males have the highest risk among all adult age groups. Moreover, older adults have the highest rate of completed suicides. With increasing age, the use of firearms
as the method for completed suicide has become more common in the United States.
Suicide Risk Assessment
Establishing rapport is essential when caring for older patients. This is especially true when evaluating suicide risk. No matter how skilled the interviewer is, older patients are less likely to disclose thoughts about suicide and self-harm. Yet the evaluation is an integral part of the treatment process because it opens up discussion between the patient and clinician, thereby allowing prevention through decreasing access to available means of suicide, building trust, facilitating a supportive therapeutic relationship, and tailoring treatment interventions.
Common barriers to open, effective communication about suicide with older patients include: (1) the belief that people should be able to deal with emotional problems without medical help; (2) the fear of being labeled “mentally ill”; and (3) the fears of being negatively judged by the clinician or referred to a psychiatrist, together with the belief that primary care physicians do not understand and cannot help people suffering from depression. Using an empathic approach, the clinician should correct any misinformation. Educating patients and members of their social support system is essential. Tremendous benefits are usually obtained by sharing with older patients that depressive illness is relatively common; that scientific research has begun to illuminate the underlying physiological basis for the illness and, therefore, views that depression is invariably a sign of character weakness or moral corruption are antiquated; and that scientifically proven, highly effective treatments are now available for depression and associated suicidal symptoms.
All patients with mood, substance use, and psychotic disorders should be assessed for suicide risk. This assessment need not be time consuming, especially when occurring in the context of an established therapeutic relationship. Simply asking questions pertaining to suicide can yield very helpful information. A suicide risk assessment requires the identification of suicidal ideation and intent, risk factors, and protective factors. The most important questions to ask concern present suicidal ideation. Contrary to common belief, asking about suicidal ideation does not increase the likelihood of a patient subsequently developing suicidal ideation. Using a “normalizing” statement as a preface to questions about suicide is often helpful.
For example: “Sometimes individuals suffering from depression contemplate taking their lives. Have you had similar thoughts?” If a patient responds that he or she is having thoughts of suicide, the clinician should ask if any plans have been made. The lethality of the plan should be weighed. For example, a plan to shoot oneself represents high lethality, whereas a plan to drink oneself to death represents lower lethality. If a plan has been made, the clinician should inquire whether steps have already been taken to carry out the plan, such as buying ammunition for a firearm, purchasing rope, or stockpiling medications. Asking about a plan helps the clinician get a better sense of the patient’s actual intent to act on the suicidal thoughts.
Another important part of the interview of a patient with depression is the identification of risk factors. A history of suicide attempts represents the most important risk factor for future attempts; however, most completed suicides are not preceded by unsuccessful attempts. Additional risk factors include hopelessness, including having no reason for living or no sense of purpose in life; feelings of helplessness; high levels of pessimism, rage, or anger; a desire for revenge; impulsiveness; increasing use of drugs or alcohol; withdrawal from friends and family; anxiety; agitation; insomnia; diminished “social connectedness” (eg, being single, living alone, or living in a rural area); uncontrolled or chronic pain; male sex; and firearm access.
Recent social losses also represent important risk factors, such as the death of a friend or family member, divorce, loss of employment, financial loss, and loss of housing. Worrisome activities include carrying out preparations for death, such as reassigning the responsibility of caring for dependents (children, pets, or a spouse), creating or updating wills, and giving away personal belongings.
Among psychiatric diagnoses, mood disorders carry the greatest risk. The majority of suicides occur within the context of an active mood episode. The presence of an active substance use disorder along with a mood disorder leads to even greater risk. Physical illness, such as stroke, and disability are also important risk factors for suicide. Generally, the greater the number of physical illnesses from which an individual suffers, the greater the risk of suicide. Risk is also elevated in the week following admission to, or discharge from, a psychiatric inpatient unit. The impact of neurocognitive disorder on suicide risk is difficult to establish; however, impairment in executive function has been found to be more prevalent in depressed older patients
with a history of suicide attempts compared to those without a history of attempts.
Protective factors are also important to identify. High perceived social support, close interpersonal relationships, feelings of usefulness, perceived ability to achieve goals, a realistic and positive future outlook, and a successful adjustment to aging are examples of protective factors. The presence of these protective factors generally indicates a positive future orientation, meaning that the patient expects life to continue. Additional protective factors include religiousness and spirituality, being married, having children, and having a sense of responsibility to family. Using the above information, a general assessment of risk, such as low, moderate, or high, should be determined and documented in the patient’s medical record. Determining the overall risk for suicide subsequently helps to determine the nature of the treatment that is required. Table 65-1 in the Major Depression chapter (Chapter 65 of this text) outlines the risk factors for suicide as well as protective factors that may decrease the possibility of suicide.
To increase the reliability of the interview, structured suicide assessment scales can be used. A commonly used suicide risk assessment tool is the Scale for Suicide Ideation, a 19-item clinician-rated measure that assesses suicidal thoughts and behaviors during the preceding 7 days as well as during the worst moments in the patient’s life as determined by the patient. The Scale for Suicide Ideation thoroughly measures many components of suicide risk, including suicidal plan, behavior, preparation for an attempt, and anticipation of an attempt. The Geriatric Suicide Ideation Scale is a recently developed, relatively easy to administer scale specifically designed for use with older individuals; it has standardized administration and scoring procedures and is sensitive in detecting suicide risk. This scale consists of 66 items and assesses four factors: suicidal ideation, death ideation, loss of personal and social worth, and perceived meaning of life. The SAFE-T is yet another recently developed structured suicide assessment. The SAFE-T reflects the American Psychiatric Association Practice Guidelines for the Assessment and Treatment of Patients with Suicidal Behaviors and consists of the following five steps: (1) eliciting any modifiable risk factors; (2) discovering protective factors that may help reduce suicide risk; (3) inquiring about current suicidal thoughts, behaviors, plans, and intent; (4) weighing factors 1 to 3 to calculate a level
of risk and then identifying potential interventions to reduce risk; and (5) documenting risk, risk mitigation efforts and their respective rationales, and plans for follow-up. The well-written guidelines for use of the SAFE-T are available on the internet. In addition, a free pocket card of the SAFE-T is available on the Suicide Prevention Resource Center website (www.sprc.org).
Management and Treatment of Suicidal Patients
Any patient assessed to be at high, acute risk of suicide should receive an emergent mental health evaluation. If such services are not immediately available, most states have laws permitting emergency hospitalization for psychiatric evaluation of patients who, due to a mental illness, present an imminent risk of harming themselves. In general, various types of law enforcement officers are prepared to assist in securing and transporting patients for such an evaluation.
Managing and treating geriatric patients with suicidal tendencies initially involves three steps. The first is to diagnose and treat the current psychiatric disorder. The second step is to assess the suicidal intent and lethality with an emphasis on prevention. And the third is to construct a specific treatment plan tailored to the patient. Guidelines for managing suicide in adults were provided by the American Psychiatric Association’s (APA) 2003 practice guidelines for the assessment and treatment of patients with suicidal behavior.
Major goals of treatment are to help the patient reduce modifiable risk factors while reinforcing protective factors. For example, the clinician might prescribe no more than a 1-week supply of medication, or less if the patient is taking multiple medications that, in combination in overdose, could be lethal.
Enlisting the help of family members to remove firearms from the home is another valuable intervention especially given that removing the means of suicide is one of the most effective interventions in suicide prevention.
Improving social support through increased supervision from family members and friends, as well as referrals as appropriate to home care and other supportive services, can also markedly diminish suicide risk. In any instance in which a clinician has significant concerns about suicide risk, a referral to a psychiatrist is highly recommended.
For older adults, specific guidelines for treating and managing suicide were provided from the Prevention of Suicide in Primary Care Elderly:
Collaborative Trial (PROSPECT). Table 66-1 shows the PROSPECT general recommendations for working with patients with suicidal ideation as well as management techniques for patients at high risk. In addition, guidelines for managing suicidal ideation in adults are provided by the American Psychiatric Association.
TABLE 66-1 ■ PROSPECTa RECOMMENDED GUIDELINES AND MANAGEMENT TECHNIQUES FOR WORKING WITH PATIENTS WITH SUICIDAL IDEATION
Guidelines for working with patients with suicidal ideation
- Be attentive
- Stay calm and nonthreatening
- Provide the patient with space and time to vent
- Be collaborative; use a team approach
- Be willing to say the word “suicide”
Management techniques for patients at high risk
- Directly assess the frequency and content of suicidal ideation and risk factors
- Explore the initial problem
- Have the patient describe reasons for and against suicide
- Assess the patient’s access to means
- Provide the patient with education regarding depression, including its etiology, prognosis, and treatment
- Decide how to manage an increase in suicidal ideation through either a formal contract or some other formality
- Provide education regarding alcohol and illicit substances and encourage their discontinuation
- Meet with the patient weekly at a minimum if suicidal ideation is present
- Write prescriptions for no more than 1 week until suicidal risk has decreased
- Provide family education regarding suicide, including how to appropriately respond to the patient and assuring the living environment is safe (ie, remove firearms)
- Provide supportive and collaborative interaction with the patient
- At each treatment assess hopelessness, suicidal ideation, and substance abuse
aPROSPECT, Prevention of Suicide in Primary Care Elderly: Collaborative Trial. Data from Brown GK, Bruce ML, Pearson JL. High-risk management guidelines for elderly suicidal patients in primary care settings. Int J Geriatr Psychiatry. 2001;16(6):593-601.
More recent reviews have identified collaborative primary-care-based depression screening and management as the intervention with the most evidence for preventing suicidal behavior. Other interventions include treatment with pharmacotherapy or psychotherapy, telephone counseling, and community-based prevention.
ANXIETY DISORDERS
Introduction
Anxiety disorders are the most common psychiatric disorders from which people suffer, and this remains true for individuals in later life. In fact, twice as many older individuals suffer from anxiety disorders as from major neurocognitive disorders, and anxiety disorders affect about five times more older individuals than do mood disorders. The nature of anxiety in older adults, however, does tend to differ in characteristic ways from that of their younger counterparts. For example, in older individuals, somatic preoccupation is more common and more intense, and, due to a weakened autonomic response, the intensity of the physical symptoms associated with panic attacks is diminished. Fewer older patients meet full criteria for an anxiety diagnosis, though subthreshold anxiety affects a greater proportion of this population. Prevalence rates for full-criteria anxiety disorders among older adults have been estimated to range from about 5% to 15%, with a 2:1 female predominance.
Generalized anxiety disorder (GAD) represents approximately 50% of all anxiety disorders diagnosed in later life, and about half of all cases of GAD develop after an individual reaches 65 years of age. Specific phobia represents approximately 40% of anxiety disorders in older patients. The most common phobia in older adults is fear of falling. Approximately 60% of older adults with a history of falling and 30% of older individuals with no such history report this fear. PTSD, OCD, and PD represent almost all of the remaining 10% of anxiety diagnoses in older individuals.
Anxiety disorders often present atypically in older individuals including the reported frequency of certain symptoms of anxiety and the manner in which the anxiety symptoms are described. For example, shortness of breath or stomach discomfort may be the only way an older individual may be able to describe feelings of anxiety. In addition to differences in reporting, the diagnostic differential is far broader. A methodical approach to evaluation is
recommended with the first step being to assess whether medical conditions and medications/substances might be mimicking, precipitating, or exacerbating the patient’s anxiety symptoms. Medical conditions known to precipitate anxiety include delirium, chronic obstructive pulmonary disease (COPD), pulmonary embolism (PE), anemia, hypoglycemia, hyponatremia, hyperkalemia, angina, and arrhythmias. If the symptoms of anxiety are determined to indeed be the physiological consequence of a medical condition (such as panic attacks caused by a COPD exacerbation, or anxiety due to a seizure disorder), anxiety disorder due to another medical condition should be diagnosed.
Substance-induced anxiety disorder should be diagnosed if the anxiety symptoms are due to substance intoxication or withdrawal. The substance may or may not be a medication (eg, both cocaine and albuterol are “substances”). To accurately make this diagnosis (that is, to exclude a primary psychiatric or medical etiology of the anxiety), establishing the timing of symptoms in relation to use of the substance(s) in question is crucial.
Importantly, the anxiety symptoms should not have been experienced prior to use of the substance and the symptoms should not continue in the absence of continued use of the substance. Anxiogenic medications include anesthetics and analgesics, sympathomimetics and other bronchodilators, anticholinergics, insulin, thyroid preparations, oral contraceptives, antihistamines, antiparkinsonian medications, corticosteroids, antihypertensive and cardiovascular medications, anticonvulsants, lithium, antipsychotics, and antidepressants. In addition, alcohol or benzodiazepine withdrawal is commonly accompanied by intense anxiety, as is caffeine, cocaine, and amphetamine intoxication.
Primary psychiatric diagnoses should be considered after a medical etiology has been ruled out. The clinician should seek to ensure as much parsimony in diagnosis as possible. For example, might the pathology of an individual with three specific phobias be better explained by a diagnosis of GAD? The DSM-5 made few major changes to the diagnostic criteria for most anxiety disorders; perhaps the most substantial change was eliminating the requirement that a patient have insight that their anxiety is “excessive.” Instead, the focus is placed on the frequency of the worry and the degree of impairment it causes.
Specific Diagnoses
Neurocognitive disorder with features of anxiety Anxiety is common in the setting of neurocognitive disorder. In this clinical context, preexisting anxiety disorders may be altered in character or sometimes exacerbated, and new-onset anxiety symptoms may arise. Degeneration in various brain regions has been implicated, including the dorsolateral prefrontal cortex, which is involved in modulating the anxious response. A bidirectional effect has been well established in this clinical setting: cognitive impairment tends to worsen anxiety symptoms, and anxiety tends to worsen cognition. Just as cognitive impairment in the setting of a depressive episode is known to often portend the development of a neurocognitive disorder, so too can new-onset late-life anxiety be a prodrome of neurocognitive disorder. Parkinson disease, with its subcortical pattern of neurodegeneration and characteristic autonomic dysfunction, has been of particular interest because of its high association with not only anxiety but also depressive symptoms. Interestingly, PTSD has been shown to be a risk factor for neurocognitive disorder.
Unfortunately, as a person’s neurocognitive disorder progresses, the benefits of previous successful psychotherapeutic treatment may be lost and problematic symptoms may reemerge. Cognitive declines may result in loss of previously mastered coping strategies. For example, loss of inhibitory control in an individual suffering from PTSD can result in a return of anxious rumination about past traumas.
Depressive disorder with features of anxiety Approximately 25% of all older individuals with an anxiety disorder also suffer from MDD, while nearly 50% of older individuals with MDD also suffer from an anxiety disorder. In some individuals, anxiety may be of clinical significance only during depressive episodes, but in others the anxiety is chronic and seems to be an independent risk factor for developing late-life MDD. Those who suffer from both GAD and MDD may be more difficult to treat successfully. Fortunately, in many cases when depression is properly treated, patients will experience an improvement or even remission of anxiety.
Generalized anxiety disorder GAD represents the extreme on the continuum of human anxiety. The core feature is excessive, uncontrollable, and often irrational worry about a broad range of life circumstances. In older individuals, the focus of anxiety is often related to their own health, including fears of memory loss and death.
Worries also commonly include the health of loved ones, finances, falls, and loss of independence. The worries are also accompanied by physical
symptoms such as muscle tension, insomnia, irritability, restlessness, being easily fatigued, and difficulty concentrating. The worries severely interfere with the patient’s ability to function and, as a result, many people become more isolated and limit social and other activities.
When evaluating for GAD, it is important to ask about a broad array of age-appropriate topics that might be a source of anxiety, such as health and finances. With the patient’s permission, interviewing friends and family may help to determine if the worry is substantially adversely affecting life quality and to obtain confirmation regarding the frequency and severity of symptoms. If new-onset GAD is suspected, it is recommended that the clinician also check closely for signs of a depressive disorder, as GAD-like anxiety can be a prominent feature of a major depressive episode. GAD is particularly important to diagnose and treat, as it is one of the least likely mental illnesses to spontaneously remit.
Panic disorder A panic attack is a brief, time-limited, well-circumscribed period of autonomic hyperarousal accompanied by physical symptoms (chest pain or palpitations, sweating, tremor) and cognitive symptoms (fear of imminent death or “losing control”). An individual who suffers repeated panic attacks, develops a fear of future attacks, and avoids situations associated with panic is said to suffer from PD. Fortunately, PD rarely develops after the age of 65, and, as previously noted, panic attacks tend to be less severe in older adults. If an individual does develop panic attacks after the age of 65, there is a substantial chance the panic symptoms are related to a medical condition, such as Parkinson disease or exacerbations of asthma, COPD, or angina.
Agoraphobia Agoraphobia literally means “fear of the marketplace” and an individual suffering from this disorder tends to avoid public places because of fear they will be unable to escape should they develop a panic attack or other embarrassing symptoms (such as incontinence). The condition often develops after an event which the sufferer interprets as being traumatic, such as a myocardial infarction or fall-related injury. Stroke has been associated with late-life onset of agoraphobia. In DSM editions prior to the DSM-5, agoraphobia was linked with PD, but this is no longer the case. In DSM-5, the two are now recognized as distinct, but often overlapping, entities.
Posttraumatic stress disorder Following exposure to a traumatic (generally life threatening) stressor, the development of specific, characteristic symptoms
defines PTSD. Briefly, the affected individual involuntarily reexperiences the event (via thoughts, flashbacks, or dreams), avoids situations that induce memories of the event (but may also experience generalized detachment from life), and develops abnormal symptoms of arousal/reactivity, such as an exaggerated startle response or irritability.
Compared to the general population, older adults are less likely to meet the full criteria for PTSD; in particular, fewer symptoms of hyperarousal and avoidance have been noted. Life events associated with aging, such as the death of a spouse, financial and physical decline, and chronic pain may cause reemergence of quiescent symptoms of PTSD (called “delayed PTSD”).
Obsessive-compulsive disorder Unwanted, persistent obsessions and their repetitive, often ritualized compulsive responses, form the core of OCD. Most cases of geriatric OCD represent a continuation of an illness that most commonly has an onset in the teen years; in other words, new onset of OCD is quite rare in the geriatric population. Interestingly, brain lesions, especially in the basal ganglia, have been detected in those rare individuals who develop late-onset OCD, suggesting neurodegeneration and other brain injury as possible mechanisms.
Social anxiety disorder and illness anxiety disorder A fear of negative evaluation, and the anxiety, avoidance, and other responses to social situations that follow, characterize social anxiety disorder (SAD). SAD appears to decrease slightly in prevalence with age, although the precipitating circumstances change.
Common concerns include incontinence, anxiety related to poor hearing or to problems remembering people’s names, and embarrassment about personal appearance. Illness anxiety disorder, previously referred to as hypochondriasis, is characterized by worry about having a serious illness.
One study has shown it to affect about 3% of visitors to primary care settings. Individuals are often unconvinced by their physician’s reassurances that they do not, in fact, have such an illness.
Specific phobias and other specified anxiety disorder The essence of a specific phobia is fear or anxiety about circumscribed objects or situations that is out of proportion to the actual risk posed, coupled with avoidance behavior. Other specified anxiety disorder is diagnosed when the anxiety features do not meet full criteria for a specific anxiety disorder, such as panic attacks with fewer than four (of the possible 13) associated symptoms. Unspecified anxiety disorder (previously “anxiety disorder NOS”) is most commonly diagnosed when an anxiety disorder is suspected, but further information and work-up would be needed to arrive at a more conclusive diagnosis (such as in emergency room settings). Adjustment disorder with anxiety should be diagnosed when the reaction to a stressor is out of proportion to the reaction that would normally be expected.
Overview of Treatment
Psychotherapy is mentioned here first because, while it requires more time to administer and may present additional difficulties in the geriatric population, it is largely free of adverse side effects. The most rigorously studied modality is CBT. However, while the CBT literature consistently documents a positive response relative to no treatment, response rates in the older population are generally lower than for younger adults. Further, there is no consistent evidence that CBT provides greater benefit than other therapy modalities (eg, supportive therapy). Psychotherapy represents the predominant mode of treatment for agoraphobia, SAD, and illness anxiety disorder, and is used in conjunction with pharmacotherapy in the treatment of other anxiety disorders. Complementary therapy is also useful and largely without adverse side effects; examples include biofeedback, progressive relaxation, acupuncture, yoga, massage therapy, art, music, and dance therapy, meditation, prayer, and spiritual counseling. A phobia of falling is first treated by accurately assessing the risk of falls and instituting fall-risk precautions, as well as vision optimization, weight-bearing exercise, and balance training such as tai chi.
In terms of pharmacological management, the treating clinician should first consider which of the patient’s medications might be removed. Older patients are particularly at risk for various forms of suboptimal prescribing, including unnecessary polypharmacy which, even in the absence of agents well-known to cause anxiety, may provoke subclinical delirium and associated anxiety symptoms. For example, a patient with overactive bladder suffering from anxiety might first be treated by testing the efficacy of a decreased dose of oxybutynin before prescribing an anxiolytic medication.
The cornerstone of pharmacological treatment is similar across anxiety disorders. Current evidence most strongly supports the use of an SSRI or SNRI at a dose higher than would usually be used to treat a depressive disorder. The recommended starting dose in geriatric individuals is generally half that recommended for a younger adult. The maximal antianxiety effect of
an SSRI generally is achieved after 6 to 12 months of treatment with a therapeutic dose. In treating GAD, there is some evidence that buspirone is effective, but generally requires 2 to 4 weeks to reach maximal efficacy.
Gabapentin and, to a lesser extent, pregabalin have also been studied and appear to be helpful. These medications can be given on a PRN basis, have little habit-forming potential, and also possess analgesic properties. PD is typically treated with a high-dose SSRI combined with psychotherapy (typically CBT and exposure therapy). Surprisingly few rigorous studies of pharmacologic treatment of PTSD have been conducted in older adults.
Evidence of efficacy has been shown in individual studies of citalopram, mirtazapine and, for sleep-related PTSD problems only, prazosin. Caution is advised with the use of tricyclic antidepressants (TCAs), given their known cardiotoxicity and anticholinergic side effects (confusion, sedation).
Benzodiazepines are also to be used with great caution in older patients, given their negative impact on cognition, tendency to cause physiologic dependence, increased risk of falls and risk of driving impairment.
Prescribing benzodiazepines as a knee-jerk reaction to a complaint of anxiety also can reinforce a message to patients that anxiety must be immediately relieved. If benzodiazepines are used, there is only infrequently an indication to continue for more than 6 weeks. In addition to drug–drug interactions, SSRIs carry risks of hyponatremia, bleeding (particularly GI bleeding, and especially when used in conjunction with an NSAID), and may rarely cause parkinsonism and akathisia.
SLEEP DISORDERS
Introduction
The DSM-5 defines insomnia as “a predominant complaint of dissatisfaction with sleep quantity or quality,” associated with at least one of the following: difficulty initiating sleep, maintaining sleep, and/or early morning wakening. The symptoms must cause “significant distress or impairment in functioning” and must occur at least 3 nights per week for at least 3 months.
In a survey of 9000 adults older than 65 years, more than 80% reported at least one problem with sleep, and more than half reported that at least one sleep complaint occurred nearly all the time. Poor sleep is distressing to patients and associated with poor outcomes including depression, anxiety, falls, mortality, long-term care placement, and decreased quality of life.
Polysomnography (PSG) studies have confirmed age-related changes in sleep including increased fragmentation of sleep. The proportion of time spent in the lighter stages of sleep (stages 1 and 2) increases with age, especially between early adulthood and midlife, and there are reductions in both the amount of slow-wave (deep) sleep and rapid eye movement (REM) sleep.
Most sleep disturbances in older adults, however, are not a normal part of aging. Sleep complaints are often comorbid with medical and psychiatric conditions, and a number of medications and substances are known to cause problems with sleep. Disturbed sleep is, therefore, often a multifactorial problem and careful assessment for all possibly contributing factors is required before concluding the patient’s sleep complaints represent expected age-associated changes in sleep.
Older adults often have medical or psychiatric conditions that contribute to sleep problems. Conditions that cause pain or discomfort, respiratory symptoms, or nocturia are common causes of sleep disturbance.
Gastroesophageal reflux disease, diabetes, cancer, neurodegenerative diseases, and mental illnesses are associated with high rates of sleep disturbance. It is also important to consider the effects of medications in older adults who are often taking multiple prescription and over-the-counter medications. Medications that may cause insomnia include stimulants, antihypertensives, bronchodilators, corticosteroids, decongestants, diuretics, and many antidepressants. In addition, sedating medications can cause daytime sleepiness and napping, disrupting nighttime sleep.
Sleep-Disordered Breathing
Breathing abnormalities occurring during sleep range from snoring to partial (hypopnea) or complete (apnea) cessation of airflow. These events may be caused by airway collapse during sleep or impaired central nervous system signaling and can lead to hypoxemia and repeated arousals throughout the night. A diagnosis of sleep-disordered breathing (SDB) is made when the total number of apneas plus hypopneas per hour is more than 5 to 10.
Obstructive sleep apnea (OSA) syndrome is SDB with excessive daytime sleepiness. SDB is more common in older than younger adults. Young and colleagues reported the prevalence of SDB in 5615 adults older than 60 years to be 32% for those ages 60 to 69, 33% for ages 70 to 79, and 36% for
ages 80 to 99.
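The diagnostic threshold described above is commonly expressed as the apnea–hypopnea index (AHI): total apneas plus hypopneas divided by hours of sleep. A minimal arithmetic sketch follows; the function names and the single cutoff value are illustrative only, since the chapter gives a range of 5 to 10 events per hour and clinical grading conventions vary:

```python
def apnea_hypopnea_index(apneas: int, hypopneas: int, sleep_hours: float) -> float:
    """Events per hour of sleep: (apneas + hypopneas) / hours slept."""
    if sleep_hours <= 0:
        raise ValueError("sleep_hours must be positive")
    return (apneas + hypopneas) / sleep_hours

def meets_sdb_threshold(ahi: float, cutoff: float = 5.0) -> bool:
    """Illustrative check against the lower end of the 5-10 events/hour range."""
    return ahi > cutoff

# Example: 30 apneas and 42 hypopneas over 6 hours of recorded sleep
ahi = apnea_hypopnea_index(apneas=30, hypopneas=42, sleep_hours=6.0)  # 12.0 events/hour
```

In practice the event counts and total sleep time come from scored polysomnography, and severity is graded on the full AHI range rather than a single cutoff.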
Periodic Limb Movements in Sleep and Restless Legs Syndrome
Periodic limb movements in sleep (PLMS) are repetitive movements that can disrupt sleep usually by preventing the patient from falling asleep. The prevalence of PLMS may be up to 45% in adults older than 65 years.
Restless legs syndrome (RLS) is a neurologic condition in which patients experience an urge to move the legs or arms, usually accompanied by a dysesthesia that is relieved or partially relieved by movement. The symptoms usually occur at rest or are worse during periods of rest and tend to be worse during the evening. RLS can be secondary to anemia or end-stage renal disease, and there is a growing body of evidence linking other chronic illnesses with RLS, including rheumatoid arthritis, chronic obstructive pulmonary disease, asthma, fibromyalgia, diabetes, Parkinson disease, and multiple sclerosis. The prevalence of RLS is between 8.7% and 19% in older adults and increases with age.
REM Sleep Behavior Disorder
REM sleep behavior disorder (RBD) is a condition in which the skeletal muscle atonia normally present in REM sleep is intermittently absent, resulting in movements such as punching, kicking, waving, or yelling during REM sleep. RBD typically affects older adults, mostly men. Associations between RBD and neurodegenerative diseases, including Parkinson disease, major neurocognitive disorder due to Lewy body disease, and multiple- system atrophy, have been established and RBD may precede the onset of the clinical neurologic symptoms by years.
Circadian Rhythm Disturbances
Aging is associated with less robust circadian rhythms, which can result in weaker sleep rhythms. Exposure to synchronizing cues also changes with age.
Older adults are exposed to less bright light, particularly those with neurocognitive disorders and those living in nursing homes. Less light exposure is associated with more awakenings. Nocturnal secretion of melatonin decreases with age. The timing of the sleep-wake rhythm advances, and sleep and wake times are routinely several hours earlier than conventional sleep and wake times. This phase shift causes problems if sleepiness in the early evening prevents individuals from engaging in activities they enjoy. Those who stay up later may become sleep deprived when their wake time remains advanced. Falling asleep in the early evening (napping) can
lead to difficulty falling asleep at the desired and usual bedtime. In these cases, advanced sleep phase disorder is diagnosed. The prevalence of the disorder in older adults has not been established but age is a risk factor.
Evaluation
Because treatment strategies differ depending on the etiology of the sleep disturbance, a careful and systematic approach to diagnosis is recommended. Assessment of sleep disruption should begin with a detailed sleep history. It can be helpful to have the patient complete a sleep diary and to get input from a caregiver who may be aware of symptoms (eg, snoring, gasping for air, limb movements) of which the patient is not aware. A medical history may reveal comorbid conditions. Both prescription and over-the-counter medications should be carefully reviewed. In addition, patients should be asked about caffeine, alcohol intake, and the use of recreational and illicit substances such as marijuana and methamphetamine. Primary insomnia is a diagnosis of exclusion when all other potential causes have been investigated and ruled out.
Older adults with SDB may present with symptoms similar to younger adults such as loud snoring, excessive daytime sleepiness, nonrestorative sleep, gasping arousals, witnessed apneas, dry mouth, or headaches upon awakening or may present with complaints of insomnia, nocturnal confusion, cognitive difficulties, enuresis, nocturia, or falls. Although body mass index is a risk factor, older adults with OSA may not be obese. Diagnosis is confirmed by PSG. Patients with PLMS may be unaware of them and may present with other complaints such as difficulty falling asleep, staying asleep, or excessive daytime sleepiness. The diagnosis of PLMS requires an overnight PSG. The diagnosis of RLS can be made on the basis of history.
Patients can be asked about unpleasant, restless feelings in their legs, especially in the evening, that are relieved by walking or movement. In patients with significant cognitive impairment, signs of discomfort in the legs such as rubbing or kneading the legs and increased motor activity that are present or worse during inactivity and are lessened by activity may be observed. Exclusion of other conditions that may be mistaken for RLS, such as neuropathy, arthritis, akathisia, pruritus, vascular insufficiency, and anxiety, is important. The RBD screening questionnaire is a 10-item instrument asking questions about dreaming and movements which can be helpful in the assessment of RBD. The relationship between REM sleep and
reported complex motor behaviors can be confirmed with PSG that includes video recording. In the assessment of circadian rhythm disturbances, examination of sleep patterns recorded in a sleep diary can be helpful.
Objective assessment of sleep (eg, with actigraphy) can also be helpful but is not required.
Management and Treatment
Treatment of comorbid medical and psychiatric conditions should be optimized, medications that contribute to sleep disturbances avoided, and caffeine and alcohol consumption limited. Treatment of SDB begins with education. Weight loss, in appropriate patients, can yield a reduction in the severity of SDB. Continuous positive airway pressure (CPAP) is the gold- standard treatment. Respiratory depressants, which can increase the duration of apneas, should be avoided.
Treatment of secondary RLS necessitates treatment of the underlying disease. Otherwise treatment of RLS/PLMS in older patients involves the careful use of pharmacologic agents. Dopamine agonists (eg, ropinirole and pramipexole) have been shown to reduce the number of kicks and arousals and are the preferred therapy for RLS/PLMS in older patients. Treatment of RBD includes education of the patient and bed partner. Patients and their bed partners can consider taking measures to avoid being injured such as the use of bedrails or sleeping in separate beds. The long-acting benzodiazepine clonazepam is often used for treatment of RBD; however, older adults are more at risk for daytime sedation, falls, and cognitive symptoms with benzodiazepine use.
Evening light exposure has been found to delay circadian rhythms. Some, but not all, studies have shown improvements in objective sleep measures, while a subjective improvement in sleep is consistently observed. The optimal time for light exposure is between 7:00 PM and 9:00 PM, so light boxes are helpful, especially during the time of year when the length of daylight is the shortest. It can also be helpful to minimize morning light exposure by the use of sunglasses when outside in the first half of the day.
For primary insomnia, nonpharmacologic interventions, including education about sleep hygiene, are preferred in older adults. Cognitive behavioral therapy for insomnia (CBT-I) combines cognitive restructuring with one or more behavioral interventions (eg, stimulus control, sleep restriction, relaxation techniques). CBT-I is as effective as medications for
short-term treatment of insomnia and is associated with better long-term outcomes in older adults.
In older adults it is especially important to consider potential side effects and medication interactions prior to considering pharmacological treatment. Long-acting hypnotics can cause excessive daytime sleepiness, impaired motor coordination, impaired cognition, respiratory depression and may be associated with tolerance and withdrawal symptoms upon discontinuation.
There are data to support the short-term effectiveness of nonbenzodiazepine receptor agonists (zolpidem, zolpidem MR, zaleplon, and eszopiclone) in older adults. Eszopiclone and zolpidem MR are FDA-approved for long-term use; however, safety concerns remain, including increased risk for falls, cognitive symptoms, driving accidents, and psychiatric disturbances. Several studies support the effectiveness of ramelteon, a melatonin receptor agonist, in older adults with insomnia. Ramelteon is not extensively metabolized by the cytochrome P450 CYP3A4 isoenzyme, reducing the potential for drug–drug interactions, but it undergoes extensive first-pass metabolism, which should be taken into account for patients with hepatic impairment. In practice, it is common to see medications from other classes prescribed for insomnia (eg, antihistamines, antidepressants, antipsychotics, anticonvulsants). There is no systematic evidence to support the effectiveness of these medications, while they are associated with risks in older adults.
SUBSTANCE USE DISORDERS
Introduction
According to the 2018–2019 National Survey on Drug Use and Health, estimates indicate about 62% use of alcohol, 21% use of tobacco products, 9.5% use of marijuana, and 0.6% use of cocaine in adults older than 50 years in the prior year. While other substances have not been examined closely in the older population, some estimates place hallucinogen use at 0.3%, inhalants at 0.2%, and methamphetamine at 0.4%. These studies also suggest a gradual increase in prevalence rates of drug misuse as younger cohorts age. Today, an estimated 5.7 million older individuals need treatment for substance use disorders. According to this same survey, 3.4% and 1.1% of adults older than 50 years had alcohol use disorder and illicit drug use disorder, respectively. Unfortunately, rising substance misuse among older adults has not been matched by a corresponding emphasis on substance use treatment for this population.
While these rates are lower than those in the younger adult population, substance use disorders in older adults are often overlooked, understudied, underdiagnosed, and undertreated. Among the many suggested reasons for this is ageism and the associated inaccurate beliefs that the prevalence of substance use among the aging population is low and that the exploration of possible alcohol and substance misuse is not routinely indicated in older adults.
Alcohol Misuse
Alcohol is the most misused substance across the lifespan, the most well-studied substance, and the substance most associated with an increased rate of other substance use in the older population. The National Institute on Alcohol Abuse and Alcoholism recommends no more than three servings of alcohol in 1 day and seven servings per week for older adults, both male and female.
Approximately 16% of men and 11% of women older than 65 years exceed these limits and meet criteria for at-risk drinking. Research also indicates that these individuals are more likely to suffer from associated mental health issues such as depression and anxiety.
The impact of alcohol on the aging body should not be underestimated. Older adults experience significantly higher blood alcohol concentrations from a given quantity of alcohol as compared to younger adults; therefore, the amount of alcohol that can be safely consumed decreases significantly with age. In addition to the normal age-associated physical and physiological changes, a higher rate of comorbid medical issues and the greater use of various medications in older adults also significantly increase the risk for serious adverse consequences from alcohol consumption.
The best treatments for alcohol misuse are prevention, early screening, and early intervention. Screening for alcohol use should be routinely conducted in the context of a thorough medical and psychiatric history and physical. Special attention should be paid to the onset of use, the current and past use pattern, the frequency of use, the indications for tolerance, and any evidence of withdrawal symptoms. Collateral information from family members or significant others is important as older adults may not report accurate alcohol use patterns due to impairment from use itself or from cognitive deficits. Physical signs of excessive alcohol use include an enlarged or tender liver; a bilateral symmetric tremor of the hands, especially when the arms are extended; and skin changes such as red palms or acne rosacea, the latter of which is not caused by alcohol use but may be significantly worsened by it. Helpful laboratory findings include elevated mean corpuscular volume, γ-glutamyl transpeptidase, and aspartate aminotransferase levels. After chronic use, elevated carbohydrate-deficient transferrin and serum uric acid levels and decreased albumin levels are often seen.
The CAGE questionnaire for alcohol use has variable sensitivity and specificity in older adults. Two 10-item questionnaires that have been specifically validated in older adult populations include the Short Michigan Alcohol Screening Test–Geriatric Version (SMAST-G) and the Alcohol Use Disorders Identification Test (AUDIT).
Alcohol contributes to significant morbidity in older adults. Isolated drinking episodes, often binges, can lead to increased falls, confusion, poor executive functioning, and other traumatic events. Long-term, chronic use can lead to multiple organ system damage. Alcohol use in the older adult often accompanies depression and anxiety. In fact, alcohol use treatment often must take into account concurrent treatment of mental health disorders. Notably, alcohol use is the second most common category of psychiatric risk for suicide attempts and completions. Additional consequences of alcohol use are sleep pattern changes and both short-term (blackouts) and long-term (dementia) cognitive deficits.
If formal treatment is indicated, a combination of nonpharmacological and pharmacological treatment routes should be considered. Unfortunately, older-age-specific studies of nonpharmacological interventions for alcohol misuse have been characterized by small sample sizes and have yielded unclear outcomes. Pharmacological options for alcohol use are designed to either decrease craving or negatively reinforce use. Naltrexone is the most well-studied medication for alcohol use disorder in older adults and is an opioid-receptor antagonist that decreases craving by attenuating pleasure derived from alcohol use. Acamprosate is another drug with limited efficacy demonstrated in older adults that is believed to affect the reward pathway in the brain, also reducing craving. Disulfiram inhibits aldehyde dehydrogenase, leading to accumulation of acetaldehyde, which results in flushing, sweating, nausea, and other uncomfortable symptoms. Disulfiram, however, is seldom used in older adults due to significant side effects such as changes in blood pressure and heart rhythm that can complicate heart disease and a number of significant drug–drug interactions due to inhibition
of the metabolism of a number of medications including warfarin, phenytoin, isoniazid, and some benzodiazepines (eg, diazepam).
One of the most important aspects of alcohol misuse management in older adults is the accurate recognition and appropriate treatment of withdrawal.
Alcohol withdrawal is especially dangerous in older patients due to the increased likelihood of seizures, irreversible damage to the brain, delirium, and other serious consequences. In the context of possible alcohol withdrawal, clinical assessment must be detailed and special attention must be paid to comorbid medical conditions and associated medication use.
Supportive therapy must be initiated immediately to replace volume loss, correct electrolyte abnormalities, prevent falls, and correct vitamin deficiencies. The standard of care regarding medication to mitigate withdrawal symptoms and to prevent delirium tremens is a benzodiazepine. Unlike in younger adults, chlordiazepoxide is not the preferred or common choice of benzodiazepine. Lorazepam, oxazepam, or temazepam are preferred for older patients due to their intermediate half-life; relatively low lipid solubility, which minimizes undesirable accumulation in the lipid compartment; and their major biotransformation pathway of conjugation with glucuronic acid, resulting in 70% to 75% of the administered dose being excreted as the glucuronide conjugate in the urine. This last characteristic allows for their use even in patients with severe hepatic dysfunction. Even with these advantages for the older patient, the use of one of these benzodiazepines must proceed with great care in order to avoid compounding the problems of withdrawal by increasing risks associated with excessive sedation and delirium, including falls and respiratory depression. The combined effect of the benzodiazepine with other medications being used by the older patient must be continuously assessed.
Benzodiazepine Misuse
While benzodiazepines are integral in treatment for alcohol withdrawal, they are also often misused in the older population. Commonly they are overprescribed and dosed above the amount required for the intended indication. Even when initially started at appropriate doses, benzodiazepines are continued without a definitive end point of use, which often leads to increased tolerance.
One of the most common reasons for benzodiazepine prescription is insomnia. As discussed in the section of this chapter devoted to sleep
disorders, the causes and contributors to insomnia in older adults are various and include gender (higher risk for women), relationship status (single, widowed, or divorced), medical (multiple medical conditions), cognitive (dementia), polypharmacy, poor sleep hygiene, and mood disturbances.
Despite these numerous possible etiologies of insomnia, benzodiazepines are often inappropriately used as the first-line agent for sleep complaints.
Another common reason for benzodiazepine prescription is the treatment of anxiety. As with insomnia, causes of anxiety are often multifactorial and benzodiazepines may not be the ideal treatment in most instances. Agarwal and Landon reported increasing rates of outpatient benzodiazepine prescribing over a 12-year period, yet Gerlach and colleagues found limited evidence of benefits from benzodiazepine usage in older adults. Over time many patients require increasing doses to achieve anxiety symptom abatement, which leads to complications, adverse events, and ultimately, the potential for dependence. As with alcohol, older adults have increased sensitivity to benzodiazepines, as metabolism is significantly slower in older adults. The risk for cognitive impairment, delirium, falls, sedation, and other unwanted effects is elevated. With physiological dependence setting in, benzodiazepine withdrawal mimics that of alcohol withdrawal and puts the older adult at similar risk as that of alcohol misuse. The evaluation and treatment of benzodiazepine withdrawal is virtually identical to that of alcohol withdrawal. To prevent benzodiazepine overuse, misuse, and complications, clinicians should carefully elicit the etiology of the sleep or anxiety complaints in order to find the optimal treatment and to minimize the use of benzodiazepines.
Opiate Misuse
Opioid misuse in older age cohorts has become an increasingly worrisome trend in the United States, particularly the misuse of opiate pain medications. Older adults are the most likely age group to be prescribed opioids long-term. The most common rationale for opiate use is the treatment of chronic pain. Although the undertreatment of pain continues to be a problem for older adults, there is no evidence that sustained, long-term use of opiate-family pain medications leads to better outcomes. Opioids increase the risk of psychological and physiological tolerance. Furthermore, as older adults are more sensitive to medications, opiate misuse often causes
sedation, constipation, cognitive impairment, and respiratory suppression, especially in context of polypharmacy and concurrent alcohol use.
Even with these unacceptable side effects, opioids remain one of the most potent pain-relieving medications. There is great controversy over appropriate guidelines on when and how to prescribe these medications. Many of the guidelines focus on curtailing long-term use, appropriate selection of drug and dose, and developing strategies for continual screening and monitoring.
Clinicians can prevent opiate misuse by maximizing the use of alternative treatments for pain in older adults. These other treatments can be tailored to each patient by examining the causes for the pain complaints and by not treating pain as a single entity unto itself. Even the medication alternatives to opiates for pain control have potentially serious adverse reactions and side effects. For example, overuse of NSAIDs can lead to gastropathies, and TCAs are not well tolerated in the older population.
Considering acetaminophen as the first-line agent for pain control is recommended. Additionally, consideration should be given to SNRIs, gabapentin, and pregabalin as treatments for chronic pain. In the context of opiate dependence, the clinician is advised to consider community substance use programs that include peer support and educational groups. The use of methadone, naltrexone, and buprenorphine as treatments for opiate dependence has not been well studied in older populations.
Marijuana Misuse
Rates of marijuana use and misuse have increased substantially in the last few years. While marijuana use in adults older than 65 years in the United States was 0.4% in 2006, this rate has increased to 5.1% in 2019. The prevalence of marijuana use will likely continue to increase in older adults given marijuana use prevalence in younger cohorts and the growing movement toward marijuana legalization in the United States.
While there have been dramatic increases in marijuana use rates among older adults, scientists still know little about its effects on cognition, drug interactions, pharmacokinetics, and risk for adverse effects in this age cohort. In the older adult population, the full health impact of chronic marijuana use has yet to be determined, but recent studies indicate that marijuana negatively impacts cognitive function. Contrary to previous expectations, there is no adequate evidence to support the belief that marijuana can serve as an
effective treatment for dementia or for symptoms of depression or anxiety. At this time, clinicians are advised to screen for and provide counseling about possible marijuana misuse in older adults.
Misuse of Other Substances
Older adults also commonly misuse tobacco, with 8.2% of adults older than 65 years engaging in current cigarette smoking behavior. Despite widespread cigarette smoking prevention and cessation efforts, misconceptions about nicotine use persist among older adults. For example, some continue to believe that in older adults, smoking cessation does not yield any benefit and that smoking can help with pain, mood, and cognition. None of these beliefs is validated, nor should they be reasons to omit smoking cessation screening when caring for an older adult. Additionally, smoking is commonly related to comorbid anxiety and other substance misuse. Behavioral modifications can be very helpful for smoking cessation. Pharmacological methods for cessation include nicotine replacement therapy and medications designed to impact the reward pathway for smoking. Bupropion and varenicline have good evidence for smoking cessation in adults, but evidence for their effectiveness in older adults is lacking. Bupropion is also potentially mood elevating and may lead to increased anxiety symptoms in some patients. For varenicline, there is a widely publicized negative effect on mood and potential to increase suicidal behavior. With a high prevalence rate of co-occurring anxiety and depression in older adults with substance use issues, it is important that varenicline’s potential negative effect on mood and suicidal behavior is closely monitored by the clinician.
PERSONALITY DISORDERS
Research in personality and personality pathology in older cohorts is an emerging field primarily driven by the increasing number of older individuals, but already some patterns and key findings have emerged.
Personality traits are enduring, consistent behaviors forming a unique profile that distinguishes individuals from each other and predicts an individual’s response to experiences or stressors. These traits are apparent as older adults respond to normal milestones and pathology associated with aging.
Because personalities are defined by long-standing patterns of behavior, global personality and personality structure tend to remain stable with aging; however, neurologic or degenerative disease and brain injury in addition to
positive changes associated with aging can lead to observed personality changes and shifts. For example, traits such as neuroticism, extraversion, and openness tend to decrease with age, while altruism and conscientiousness increase with age.
Personality disorders are organized into three groups or clusters in the DSM-5. Cluster A disorders include paranoid, schizoid, and schizotypal personality disorders. They are characterized by bizarre, eccentric behavior, and magical or delusional-type thinking. The cluster B disorders include antisocial, borderline, histrionic, and narcissistic personality disorders.
Cluster B disorders are characterized by impulsive behaviors, interpersonal instability, and unstable or labile affect. The cluster C disorders are avoidant, dependent, and obsessive-compulsive personality disorders.
Disorders in this cluster are driven by an anxious diathesis that leads to worry, fear, rigidity, and isolation.
The prevalence of personality disorders in the general adult population is about 12%. While personality disorders in older adults have not been well studied, large epidemiological surveys have found the prevalence of at least one personality disorder in older adults ranging from 11% to 15%. In certain geriatric subpopulations, however, the prevalence may be higher. For example, in geriatric patients with MDDs and dysthymia, up to a third also have a personality disorder, most commonly avoidant, dependent, or other cluster C disorders and traits. In older adults, the odd traits associated with cluster A personality disorders and the anxious behaviors of cluster C personality disorders are more common than the impulsive behaviors seen in cluster B personality disorders, which are more common in younger adults.
Of those older adults in the psychiatric outpatient population, anywhere from 5% to 33% meet criteria for personality disorder. Rates increase in geriatric patients receiving inpatient mental health treatment, with prevalence rates ranging from 7% to 80%. In some cases, the impact and severity of symptoms in those with a known diagnosis of a personality disorder may worsen later in life. For example, an exacerbation of symptoms and behaviors may occur in the setting of significant social stressors such as the loss of a supporting person or other stabilizing environmental factors such as a job. Prevalence data in older age groups are limited by the current methods and assessment tools used in epidemiologic studies directed at personality and personality disorders, which may inadequately correlate to the specific behaviors in older individuals. Further research and attention to this
emerging field is needed to better understand the manifestation of traits and diagnosis of personality disorder in older adults. Additional disorders outside of this classification system include personality change due to another medical condition, other specified personality disorder, and unspecified personality disorder.
In the DSM-5, personality disorders are defined as “an enduring pattern of inner experience and behavior that deviates markedly from the expectations of the individual’s culture.” This definition suggests a chronic course of long-standing, maladaptive coping behaviors with typical onset in adolescence or early adulthood that may persist into later life stages. The diagnosis of a personality disorder in an older adult requires integrating multiple sources of information such as patient information and medical history, self-reported autobiographical history, self-description, collateral from peers, family, supports or other informants, clinical observation of exhibited behaviors, and patterns of coping. Structured interviews or personality questionnaires may also aid in the diagnosis and synthesizing a formulation of personality and behavior patterns.
The diagnosis of a personality disorder can be challenging in the presence of an exacerbation of acute psychiatric symptoms, especially in the setting of limited history or collateral. For example, maladaptive coping behaviors and traits observed in an individual with a mood or anxiety disorder may be symptoms of the mental illness. In this case, further history or resolution of the behaviors or traits with treatment of the underlying mental illness may help clarify the diagnosis. Making a distinction between functional impairments due to psychological or environmental factors associated with aging and impairments resulting from personality pathology may further complicate assessment and diagnosis of a personality disorder.
Borderline personality disorder (BPD) is a commonly encountered challenge in primary care and mental health settings with high comorbidity associated with other psychiatric disorders, chronic pain, fibromyalgia, and migraines. In addition, it is associated with worse outcomes in the treatment of other comorbid psychiatric illnesses. The disorder is characterized by interpersonal instability that is present not only in the patient’s personal life and support systems but may also play out in the doctor-patient relationship through what is commonly referred to as transference and countertransference. Other traits include affective instability, emotional lability, impulsive behavior, identity disturbance, fear of abandonment, and
feelings of emptiness or other dissociative symptoms. The etiology is often multifactorial including genetic vulnerability, brain abnormalities, developmental arrests, and experiences early in life especially with respect to trauma, abuse, abandonment, and absence of secure attachments.
Remission rates are relatively high compared to other personality disorders; however, relapses can occur. In addition, remission of diagnostic criteria does not necessarily correlate with functional improvement. Up to 80% of patients with BPD exhibit suicidal behavior, and 60% to 70% make suicide attempts. Although this suicidal behavior does not necessarily lead to completed suicide, it remains a significant cause of death in this population. In addition, the severity or potential lethality of suicide attempts may increase or escalate with subsequent attempts or suicidal behaviors.
Manualized, structured therapies such as dialectical behavior therapy (DBT), transference-focused psychotherapy, and mentalization-based therapy (MBT) are the mainstays of treatment. While symptom-focused psychopharmacology can be a helpful adjunctive treatment, polypharmacy should be avoided when possible.
Geriatric patients with personality disorders are at greater risk of depression, suicide, cognitive impairment, and social isolation. Developing a treatment plan that addresses the symptoms, behaviors, and associated functional impairments is an essential part of recovery and wellness in this population. There are several approaches to the treatment of personality disorder, and depending on the case, a multifactorial strategy may be warranted, especially in complicated or treatment-resistant cases.
In the evaluation and diagnostic period, the first step is to identify and treat any comorbid primary psychiatric illnesses. This will help clarify the target symptoms and distinguish more long-standing maladaptive behaviors from symptoms of a mental illness. Common diagnoses or comorbidities to consider in the differential diagnosis include mood or depressive disorders, anxiety disorders, late-onset schizophrenia, delusional disorder, and delirium. Medical comorbidities should also be considered and ruled out. Somatization is a common symptom encountered in personality disorders, in which physical symptoms and perceptions may be psychologically driven. If somatic symptoms persist even after medical conditions have been ruled out, consideration of an underlying personality disorder, somatization disorder, or other commonly comorbid condition such as fibromyalgia and their respective treatments may be warranted.
Given the chronic and pervasive nature of personality disorders, full remission may be unrealistic, but evidence-based treatments have been characterized that improve the severity of symptoms, decrease impairment, and provide insight. Research in older adults with personality disorders demonstrates that DBT and schema therapy are effective treatment options. Other possible therapies include CBT, interpersonal therapy (ITP), and problem-solving therapy (PST). The efficacy of therapy may be more limited in the setting of significant neurocognitive disorders with associated memory and executive functioning impairments.
Other effective behavioral interventions include the implementation of structure and clear boundaries. Clinicians can use aspects of the therapeutic treatment frame to set appropriate boundaries and limitations, schedule regular appointments to check in and prevent escalating crises, and communicate with all providers to unify the treatment strategy and avoid common defenses such as splitting between members of the treatment team. Clear communication, minimizing polypharmacy, and learning how to respond to commonly encountered behaviors further enhance therapeutic rapport and positive treatment outcomes. The use of a behavior contract or treatment agreement is helpful, especially in cases where problem behaviors persist despite the aforementioned measures. This is a document drafted in conjunction with the patient in which boundaries, expectations, treatment goals, and consequences of nonadherence are agreed upon by the patient and the treatment team. The therapeutic power of a treatment agreement is increased if members of the patient’s support system are allowed to read and then subsequently support its content. Signing such a document adds an additional level of clarity to boundary setting and commitment to the treatment.
Some studies also suggest improved outcomes with a combination of therapy and medications, which also speaks to the high rate of comorbidities between personality disorders and other psychiatric illness or diagnoses.
When considering pharmacotherapy, strategies should consider targeting specific symptoms or the predominant presenting problems. Classes of medication to consider when targeting individual symptoms include second- generation antipsychotics, mood stabilizers, and serotonergic medications. Naltrexone and clonidine are used in the setting of self-mutilation and the associated impulsivity of this harmful, maladaptive behavior. Similar to treatment of any medical or psychiatric condition in a geriatric patient, safe and effective pharmacotherapy accounts for compliance and risk for
noncompliance, abuse potential, pharmacokinetics, pharmacodynamics, interactions with other medications, and effect on other age-specific comorbidities.
SUCCESSFUL AGING
As part of the trend toward ever-increasing life expectancy and the unprecedented increase in the proportion of the population that is older, a new focus has been placed on what is now called “successful aging.” The concept of “successful aging” stands in stark contrast to the deep-rooted cultural and societal ageistic beliefs, which were discussed above and which affect our patients and how they may be perceived. The definition of “successful aging” and its determinants remains variable. The original model by Rowe and Kahn included three domains: absence of disease and disability, high cognitive and physical functioning, and active engagement with life. This model, however, has been criticized for its overemphasis on health and because it fails to account for many individuals who do not meet the Rowe and Kahn criteria for physical health and yet subjectively rate themselves as aging successfully and report a high degree of satisfaction in later life stages. Newer definitions of “successful aging” have been modified so that they apply to older individuals with and without medical and psychiatric morbidities. Qualitative studies of successful aging indicate that older adults consider the ability to adapt to circumstances and a positive attitude toward the future as being more important than an absence of physical disease and disability. Investigations have revealed a paradox of aging: even as physical health declines, self-rated successful aging and other indicators of psychosocial functioning improve in later life.
Neuroscience research during the past 15 years has demonstrated neuroplasticity of aging—that is, with optimal physical, cognitive, and social activity, development of new synapses, dendrites, blood vessels, and even neurons in specific regions such as the dentate gyrus of the hippocampus can occur in older animals and probably in humans. Clinical research supports a model in which positive psychological traits such as resilience, optimism, and social engagement interact with and feed into each individual’s evaluation of the degree of well-being, and are a stronger predictor of outcomes such as self-rated successful aging than physical health. Moreover, several of these traits have been shown to have a positive effect on survival that rivals or exceeds that of well-established health risk
factors such as smoking, hypertension, obesity, and sedentary lifestyle. One quality that has been reported to increase with age in some, but not all, studies is wisdom, conceptualized as a complex trait associated with advanced cognitive and emotional development, which is experience-driven. Research on wisdom has only recently gained some interest among scientists, although the concept of wisdom dates back to ancient times. Despite variability among the different published definitions of wisdom, there are several common elements, including prosocial attitudes and behaviors (eg, compassion, empathy), social decision making, insight, decisiveness, acknowledgement of uncertainty, emotional regulation, openness to new experience, spirituality, and sense of humor. What unites these components is their utility for the self (eg, greater well-being) and for others (serving common good).
Successful aging is also seen in some people with serious mental illness.
Studies have found that relative to their younger counterparts, middle-aged and older adults with schizophrenia tend to have better psychosocial functioning, including better adherence to medications and self-rated mental health, and lower prevalence of substance use and psychotic relapse.
Survivor bias is not the primary explanation for this finding. A minority of older persons with schizophrenia experience sustained remission. Reported predictors of sustained remission include psychosocial support, early initiation of treatment, better premorbid functioning, and having been married.
Mental health and primary care providers have a unique opportunity to help promote “successful aging” by teaching patients and their family members the principles of health, nutrition, and wellness and by facilitating improved quality of life by treating symptoms, improving function, and developing psychosocial interventions. Ongoing studies of “successful aging” are continuing to clarify the definition and are focusing on findings that have clinical utility. Strategies to enhance successful aging include calorie restriction, physical exercise, stopping smoking and substance use, eating the so-called super foods (eg, broccoli, cabbage, cauliflower, spinach, vitamin E, curcumin) rich in antioxidants, and ensuring appropriate health care. Equally important are cognitive and psychological strategies such as developing positive attitudes and resilience, learning new skills, engaging in stimulating activities, and managing stress. An important principle to
remember is that it is never too early nor too late to start on the path to successful cognitive and emotional aging.
FURTHER READING
Abrams RC, Bromberg CE. Personality disorders in the elderly: a flagging field of inquiry. Int J Geriatr Psychiatry. 2006;21:1013–1017.
American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders. 5th ed. Washington, DC: American Psychiatric Association; 2013.
APA Council on Geriatric Psychiatry. Resource Document on the Use of Antipsychotic Medications to Treat Behavioral Disturbances in Persons with Dementia. Published online March 2014. http://www.psychiatry.org/practice/professional-interests/geriatric-psychiatry/geriatric. Accessed February 22, 2022.
Blazer DG, Wu LT. The epidemiology of substance use and disorders among middle aged and elderly community adults: National Survey on Drug Use and Health (NSDUH). Am J Geriatr Psychiatry. 2009;17(3):237–245.
Bostwick M, Rackley S. Addressing suicide in primary care settings. Curr Psychiatry Rep. 2012;14:353–359.
Conwell Y, Van Orden K, Caine ED. Suicide in older adults. Psychiatr Clin North Am. 2011;34(2):451–468.
Granholm E, Holden J, Link PC, et al. Randomized controlled trial of cognitive behavioral social skills training for older consumers with schizophrenia: defeatist performance attitudes and functional outcome. Am J Geriatr Psychiatry. 2013;21:251–262.
Holt-Lunstad J, Smith TB, Baker M, Harris T, Stephenson D. Loneliness and social isolation as risk factors for mortality: a meta-analytic review. Perspect Psychol Sci. 2015;10(2):227–237.
Hornyak M, Trenkwalder C. Restless legs syndrome and periodic limb movement disorder in the elderly. J Psychosom Res. 2004;56(5):543–548.
Jeste DV, Savla GN, Thompson WK, et al. Association between older age and more successful aging: critical role of resilience and depression. Am J Psychiatry. 2013; 170:188–196.
Jin H, Shih PA, Golshan S, et al. Comparison of longer-term safety and effectiveness of 4 atypical antipsychotics in patients over age 40: a trial using equipoise-stratified randomization. J Clin Psychiatry. 2013;74:10–18.
Kotwal AA, Holt-Lunstad J, Newmark RL, et al. Social isolation and loneliness among San Francisco Bay area older adults during the COVID-19 shelter-in-place orders. J Am Geriatr Soc. 2021;69(1):20–29.
Krendl AC, Perry BL. The impact of sheltering-in-place during the COVID-19 pandemic on older adults’ social and mental well-being. J Gerontol B Psychol Sci Soc Sci. 2021;76(2):e53–e58.
Lee EE, Depp C, Palmer BW, et al. High prevalence and adverse health effects of loneliness in community-dwelling adults across the lifespan: role of wisdom as a protective factor. Int Psychogeriatr. 2019;31(10):1447–1462.
Lenze EJ, Wetherell JL. Anxiety disorders. In: Blazer DG, Steffens DC, eds. The American Psychiatric Publishing Textbook of Geriatric Psychiatry. Arlington, VA: American Psychiatric Publishing, Inc; 2009:333–345.
Mohlman J, Bryant C, Lenze EJ, et al. Improving recognition of late life anxiety disorders in Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition: observations and recommendations of the Advisory Committee to the Lifespan Disorders Work Group. Int J Geriatr Psychiatry. 2012;27(6):549–556.
National Institute on Alcohol Abuse and Alcoholism. Alcohol and Aging. http://niaaa.nih.gov. Accessed February 22, 2022.
Neikrug A, Ancoli-Israel S. Sleep disorders in the older adult—a mini-review. Gerontology. 2010;56(2):181–189.
Norman D, Loredo JS. Obstructive sleep apnea in older adults. Clin Geriatr Med. 2008;24(1):151–165.
Okolie C, Dennis M, Simon Thomas E, John A. A systematic review of interventions to prevent suicidal behaviors and reduce suicidal ideation in older people. Int Psychogeriatr. 2017;29(11):1801–1824.
Penders KA, Peeters IG, Metsemakers JF, Van Alphen SP. Personality disorders in older adults: a review of epidemiology, assessment, and treatment. Curr Psychiatry Rep. 2020;22(3):1–14.
Stanley B, Brown G. Safety Plan Treatment Manual to Reduce Suicide Risk: Veteran Version. 2008. http://www.mentalhealth.va.gov/mentalhealth/suicide_prevention. Accessed February 22, 2022.
Uchida H, Mamo DC, Mulsant BH, et al. Increased antipsychotic sensitivity in elderly patients: evidence and mechanisms. J Clin Psychiatry. 2009;70(3):397–405.
Wennberg AM, Canham SL, Smith MT, et al. Optimizing sleep in older adults: treating insomnia. Maturitas. 2013;76(3):247–252.
Woo BK, Daly JW, Allen EC, et al. Unrecognized medical disorders in older psychiatric inpatients in a senior behavioral health unit in a university hospital. J Geriatr Psychiatry Neurol. 2003;16(2):121–125.
Part IV
Principles of Palliative Medicine and Ethics
Chapter 67. Palliative Care and Special Management Issues
Chapter 68. Pain Management
Chapter 69. Management of Common Nonpain Symptoms
Chapter 70. Palliative Care Across Care Settings
Chapter 71. Effective Communication Strategies for Patients with Serious Illness
Chapter 72. Ethical Issues
Chapter 67
Palliative Care and Special Management Issues
Paul Tatum, Shannon Devlin, Shaida Talebreza, Jeanette S. Ross, Eric Widera
INTRODUCTION
Palliative medicine is an essential skill set for health care professionals caring for older adults. Supporting the needs and goals of older adults as they age is a complex process that often involves balancing goals of life prolongation, preservation and restoration of functional status, mitigation of risk and harm, and relief of symptoms. Palliative care aims to ensure that individualized care plans match these goals through interprofessional care, careful communication, and goal setting.
Palliative care for older adults poses distinct challenges. For older patients with diminished reserve to respond to stressors, the potential for harm from interventions can be substantially greater than in younger populations. Cognitive impairment may limit patients’ participation at the times when the most difficult decisions must be made. The interaction of multiple illnesses in multimorbidity may reduce the potential benefit of interventions that are otherwise therapeutic in a single disease state.
DEFINING PALLIATIVE CARE
Palliative care aims to improve the quality of life for both the patient with serious illness and the family. Palliative care focuses on providing patients with relief from the symptoms, pain, and stress of a serious illness. It is
appropriate for any type of diagnosis, any stage of a serious illness, and importantly, can be provided together with curative treatment.
Specialty palliative care consists of care by an interprofessional team of physicians, nurses, social workers, chaplains, and other individuals with expertise in palliative medicine, who work with patients’ other health care professionals to provide care that matches patients’ goals. In addition to palliative care specialists, the core principles of palliative care can be implemented by all health care professionals (often described as primary palliative care). The key components of primary palliative care include symptom management, coordination of care, communication about goals of care and advance care planning (ACP), caregiver support, and, when needed, referral to specialist palliative care teams.
Hospice Care
Hospice care is a specialized form of palliative care for patients with limited life expectancy. Hospice care provides medical, psychosocial, and spiritual support to the patient. Hospice also supports family members coping with the complex consequences of illness, disability, and functional decline as death nears and includes bereavement support after death of the patient. Hospice care does not aim to shorten or prolong life, but rather provides comfort and support services to help people live out the time they have remaining to the fullest extent possible.
Learning Objectives
Identify the palliative needs of an aging population and recognize the potential benefits palliative care delivers in collaboration with or as part of geriatric medicine practice.
Describe the unique palliative care needs of older adults including patients with dementia, frailty, or multimorbidity, as well as diverse populations.
Distinguish the various advance care planning (ACP) formats and tools and adapt a three-step process to prognostication in older adults as part of ACP.
Key Clinical Points
1. Multiple randomized trials show palliative care improves outcomes including quality of life, satisfaction with care, reduced family distress, and increased hospice utilization.
2. For older adults with multiple chronic illnesses, a five-step approach to palliative care includes the following:
Determine patient preferences.
Interpret the evidence for treatment with recognition of the limitations of applying it to an older adult population.
Let prognosis frame clinical management decisions.
Consider treatment complexity and feasibility as part of management decisions.
Optimize therapies and care plans.
The Medicare hospice benefit employs an interdisciplinary team of professionals including doctors, nurses, aides, social workers, and spiritual care coordinators to address the patient’s physical, emotional, social, and spiritual needs. Under the Medicare hospice benefit, a patient is eligible for hospice care when the patient’s attending physician and the hospice medical director certify that she or he is terminally ill with an expected prognosis of less than 6 months “if the disease runs its usual and expected course.” The Centers for Medicare and Medicaid Services (CMS) has guidelines entitled local coverage determinations (LCDs) to help physicians determine a prognosis of 6 months or less. These LCDs are available on the CMS website and are guidelines to help determine prognosis; they are not rules or requirements for hospice admission. Once a patient is admitted, the hospice agency is required to provide services that are reasonable and necessary for the palliation and management of a patient’s terminal illness and related conditions, including doctor and nursing visits, hospice aide and counseling services, medications, medical equipment and supplies, and short-term respite and inpatient stays to manage pain and symptoms.
PALLIATIVE CARE NEEDS OF AN AGING POPULATION
Health care in the United States has made tremendous advances which have allowed for an aging population. The typical death has been transformed from the nineteenth-century picture of an acute illness over a short time from infectious disease into a process of decline over years with increasing
morbidity and functional dependency from chronic disease. The average age at the time of death in the United States per 2019 data is now 79 years (81 years for females and 76 for males), and survivors to age 65 live on average another 20 years (Figure 67-1).
FIGURE 67-1. Life expectancy at birth and age 65, by sex: United States, 2018 and 2019. (Data from National Center for Health Statistics, National Vital Statistics System, Mortality.)
While chronic illness in older adults is increasingly complicated by multiple conditions, two major diseases, heart disease and cancer, still accounted for nearly half of all deaths in 2018. Apart from acute respiratory illness, accidents, and suicide, the majority of deaths of older adults are associated with a period of chronic disease. In 2018, two-thirds of people aged 65 and older had multiple chronic conditions per the Centers for Disease Control and Prevention (CDC).
The exponential increase in COVID-19 illness in the United States led to substantial case counts and mortality, particularly among older adults, in 2020 to 2021, at times exceeding heart disease and cancer as a cause of death. COVID-19’s impact on older adults starkly demonstrates the unique needs of the older adult population. Geriatric palliative care during the COVID-19 pandemic and future pandemics can work within systems to balance potentially burdensome interventions with complex goals and to address the common sequela of decline in performance status after infectious illness.
As a patient’s number of comorbid conditions rises, the complexity of delivering care increases, with profound consequences. Patients with multimorbidity are at increased risk of disability and institutionalization, poorer quality of life, and higher risk of harm from medical treatments. Over time, the care these patients receive often becomes more fragmented. Among Medicare fee-for-service decedents in 2015, 40% of deaths occurred in the home or community setting, and 10% of decedents had a change in the setting of care in the last 3 days of life.
THE BENEFIT FOR PALLIATIVE MEDICINE
Palliative medicine is now well established as a discipline, and as the field matures, there is a growing body of literature showing its benefit.
The Evidence for Palliative Care Systems
The evidence for specialist palliative care interventions includes multiple randomized trials showing palliative care improves outcomes including quality of life, satisfaction with care, reduced family distress, and increased hospice utilization. The landmark Temel study demonstrated the value of concurrent care, that is, palliative care delivered early and simultaneously with curative, disease-focused care in advanced lung cancer patients. Early palliative care for patients with newly diagnosed metastatic non–small cell lung cancer was associated with higher quality of life, less depression, and a statistically significant longer median survival (12 months vs 8.9 months). A subsequent, larger cluster randomized trial of advanced cancer patients with good performance status, representing a wide variety of cancer types, demonstrated improved quality of life with early palliative care. Notably, statistical significance in many outcomes was not achieved until the fourth month of palliative care, emphasizing the importance of early referral to palliative care to affect quality-of-life measures beyond pain control. The ENABLE III study, which involved multiple types of cancer rather than lung cancer alone, showed that initiating concurrent palliative care at the time of a cancer diagnosis had a significant impact on 1-year survival compared with initiating palliative care 3 months after initial diagnosis (63% early vs 48% delayed).
Extensive evidence is also accumulating for palliative care delivery for noncancer diagnoses. A 2020 meta-analysis of 28 trials of palliative care in noncancer patients (13,664 patients, mean age 74), including patients with heart failure (10 studies), dementia (4 studies), chronic obstructive lung disease (3 studies), and multimorbidity (11 studies), showed that palliative care is associated with less emergency department use, less hospitalization, and lower symptom burden. Trials of palliative care to date have been least promising in dementia and highlight opportunities to better tailor palliative care to the special needs of persons living with dementia.
The Evidence for Hospice
Hospice has been shown in numerous studies to improve symptom control, increase patient satisfaction and quality of life, and help patients and families prepare for death. The highest-quality outcomes tend to occur for patients who were enrolled in hospice care for longer than 30 days. Since 2014 the CMS has collected quality data via the Hospice Item Set, which has seven key domains of patient quality.
A major benefit of hospice over usual care is that hospice allows patients to die in their own place of residence when that is the patient’s wish.
Hospice care adds value by delivering high-quality care while reducing Medicare costs, primarily through caring for patients in the home setting. Cost savings increase with greater time spent in hospice care. The maximum reduction in expenditures tends to occur when the hospice length of stay exceeds 50 days.
A specific concern of geriatricians and policy makers is whether hospice adds benefits for patients with dementia in long-term care. Nursing home residents with dementia and cancer who received hospice care in the last 30 days of life were less likely to die in a hospital than those who did not receive hospice, and hospice enrollment increases the likelihood that patients with dementia who have pain receive an opioid.
PALLIATIVE CARE IN SPECIAL POPULATIONS
Palliative care and geriatrics are both designed to serve the needs of the most vulnerable patients with a “common central philosophy of patient-centered commitment to holistic and humane care.” Due to the growing geriatric population, there is an unprecedented need to provide excellent palliative care to older adults, especially those with dementia, frailty, and multimorbidity. Partnerships between geriatricians and palliative medicine specialists can be especially useful for management of patients with these
conditions, with each specialty working together to ensure the highest level of evidenced-based compassionate care to older adults.
Dementia
Older adults with dementia have unique palliative medicine needs given the progressive nature of their illness with significant, prolonged disability and predictable increases in symptom burden as dementia advances. This population especially benefits from combined geriatric and palliative medicine approaches.
It is imperative that health care professionals provide appropriate management of behavioral and psychological symptoms of dementia. As discussed in detail in the management of agitation in Chapter 60, management should involve (1) identifying the specific problem behavior and its severity,
(2) identifying potential triggers for the behavior, (3) removing underlying physiologic and environmental triggers whenever possible, (4) attempting nonpharmacologic interventions to improve behavior, and (5) as a final resort, using recommended targeted pharmacotherapy for behaviors that do not respond to attempts at nonpharmacologic management in situations that risk patient or caregiver safety. As part of the American Board of Internal Medicine (ABIM) “Choosing Wisely” initiative, the American Geriatrics Society (AGS) recommends against the use of antipsychotics as first-line treatment for behavioral and psychological symptoms of dementia. Notably, a cluster randomized trial of nursing home patients with behavioral problems using an empiric stepwise protocol to treat pain resulted in a decrease in the number of symptoms. Most patients were treated with the first step of the protocol, acetaminophen. Opioids may be tried for nonresponders. The Pain Assessment in Advanced Dementia (PAINAD) Scale may be a useful tool in this situation.
In advanced stages of dementia, weight loss and dysphagia are common with the latter putting patients at risk for aspiration pneumonia. Weight loss may be secondary to decreased appetite, depression, oral problems such as ulcers or ill-fitting dentures, increased wandering, or medication side effects. A reversible etiology for weight loss is not always present. The addition of nutritional supplements or appetite stimulants has not been shown to improve clinically important outcomes. For patients with dysphagia, several organizations, in the ABIM “Choosing Wisely” initiative, recommend against percutaneous feeding tubes and recommend in favor of offering
oral assisted feeding. Studies have shown that feeding tubes in advanced dementia do not increase survival or quality of life, prevent aspiration pneumonia, or improve pressure ulcer healing or nutritional parameters (weight, albumin). In fact, feeding tube usage has been associated with the development of pressure ulcers and the use of pharmacologic and physical restraints. The majority of feeding tubes in patients with advanced dementia are inserted during an acute hospitalization.
Infections, specifically urinary tract infections and pneumonia, are frequently diagnosed in advanced dementia and commonly occur as patients near end of life. In nursing home residents with advanced dementia, observational data show suspected urinary tract infections rarely meet criteria for antibiotics; however, the majority are treated with them. Findings from a prospective cohort study suggest that the treatment of suspected urinary tract infections in this population does not prolong survival.
Antibiotic therapy for aspiration pneumonia in advanced dementia has been associated with improved survival but not improved comfort. For aspiration pneumonia, the most aggressive treatment approaches with intravenous therapy and hospitalization have been associated with the greatest discomfort, and the survival benefit associated with antibiotic therapy was similar regardless of administration route. Interventions such as opioids, fans, and acetaminophen can increase comfort. In addition to using the minimal clinical criteria for the initiation of antibiotics, appropriate management of infections in advanced dementia requires an attention to a patient’s primary goals (ie, life prolongation, comfort, safety) in order to avoid unnecessary or unwanted treatment burden.
Health care professionals caring for older adults with dementia should engage patients and their families in ACP early and include an assessment of understanding of diagnosis, prognosis, and disease trajectory. The hope is to align treatment preferences with patient values and decrease stress on future caregivers. ACP discussions may result in completion of an advance directive document in which the patient specifies preferences for future care and/or designates an authorized surrogate decision maker to communicate their preferences in circumstances when they no longer possess medical decision-making capacity (see more details in Chapters 7 and 10). Dementia-specific advance directive documents are now available given dementia’s distinct clinical course. While barriers to these conversations exist such as difficulty with prognostication, lack of decision-making capacity, and patient
or family reluctance to engage, both geriatricians and palliative medicine specialists are well equipped to conduct these vital discussions. Online resources are available for assistance (Table 67-1).
TABLE 67-1 ■ RESOURCES AVAILABLE FOR DEMENTIA- SPECIFIC ADVANCE CARE PLANNING
ACP discussions with surrogate decision makers should continue as dementia progresses. Older adults with advanced dementia living in nursing homes who have proxies with an understanding of their terminal prognosis are less likely to receive burdensome interventions. Consideration for hospice care in end-stage dementia is also important as enrollment is associated with greater satisfaction in patient care. Determining a 6-month prognosis in advanced dementia can be challenging. Factors such as irreversible weight loss, low body mass index, functional dependence, and advanced age are associated with limited prognosis. Palliative medicine specialists can also help with prognostication.
Frailty
While the exact definition of frailty remains a subject of debate (see Chapter 42), frail patients are particularly vulnerable to adverse outcomes and have significant palliative care needs. For instance, severely frail older adults have shown poor response to treatment and may not benefit from intensive rehabilitation efforts. Attempting rehabilitation of the most frail may in fact
cause harm. The key to managing the palliative needs of frailty is determining goals and recognizing patients with poor potential to improve. When in doubt, a therapeutic trial aimed at improving performance status is always appropriate, but if improvement is not achieved, it should prompt a repeat discussion of goals and prognosis.
Determining the severity of frailty and the potential for reversibility can help clinicians recommend appropriate treatment. Frailty definitions, staging, and recommended treatment options for each stage are described in Table
67-2.
TABLE 67-2 ■ A CLINICAL MANAGEMENT APPROACH TO FRAILTY
Patients with frailty and multimorbidity may still be eligible for hospice even if they do not have a single terminal illness. While CMS has stated that frailty, debility, or adult failure to thrive are not accepted as the principal hospice diagnosis reported on the Medicare hospice claims form, this does not mean that frail patients are not eligible for the Medicare hospice benefit. A patient is considered hospice eligible if the attending physician and
hospice medical director have certified the patient to be terminally ill with a prognosis of less than 6 months. This prognosis is determined based on the severity and irreversibility of the patient’s frailty as well as their multimorbidity. The hospice medical director can use the hospice LCDs available on the CMS website to support the determination that the patient has a prognosis of less than 6 months.
Multimorbidity
Providing optimal care for older adults with multiple chronic conditions, or multimorbidity, presents one of the greatest challenges in geriatrics. Patients with multimorbidity have increased use of health care resources, poorer quality of life, and higher rates of institutionalization, disability, and death. To address the challenge of providing patient-centered care to older adults with multimorbidity, a stepwise approach can integrate patient care with the key principles of geriatric and palliative medicine in five domains: (1) patient preferences, (2) interpreting the evidence, (3) prognosis, (4) clinical feasibility, and (5) optimizing therapies and care plans. The American Geriatrics Society’s “Patient-Centered Care for Older Adults with Multiple Chronic Conditions: A Stepwise Approach: Expert Panel on the Care of Older Adults with Multimorbidity” is an excellent online resource: https://geriatricscareonline.org/ProductAbstract/Framework-for-Decision-making-for-Older-Adults/CL026.
Considerations for Providing Inclusive Care
The US population is ethnically diverse, and there may be differences in perceptions of and readiness for palliative care based on race and socioeconomic factors. Ornstein’s 2020 analysis of racial disparities in the use of hospice and end-of-life treatments found that Black individuals were significantly less likely to use hospice and more likely to have multiple emergency department visits and hospitalizations and to undergo intensive treatment in the last 6 months of life compared with White individuals, regardless of cause of death. Bazargan examined awareness of palliative care, hospice care, and advance directives in a large, ethnically diverse sample. Hispanic and non-Hispanic Black participants were far less likely to report having heard about palliative and hospice care and advance directives than their non-Hispanic White counterparts. In this study, 75%, 74%, and 49% of Hispanic, non-Hispanic Black, and non-Hispanic White participants, respectively, reported that they had never heard about palliative care.
Patients who are unable to speak the language of the care team are at risk of receiving care that may not match their wishes. Use of family members as interpreters risks information being withheld or incomplete translation. A trained medical interpreter at bedside or a telephone translator service is recommended.
The National Academy of Medicine (NAM) report on health care disparities and unequal treatment determined that patients’ attitudes toward health care and treatment preferences were not the major source of disparities. At the level of the clinical encounter, factors that lead to disparities include bias or prejudice, stereotypes (beliefs held by the provider about the behavior or health of minorities), and uncertainty about preferences. The key systems-level factors in disparities per the NAM report include cultural and linguistic barriers, fragmentation of health care systems, and the types of incentives in place.
While all persons with serious illness endure difficulties, patients who are lesbian, gay, bisexual, and transgender/transsexual (LGBT) may face unique issues distinct from those of the heterosexual population. LGBT older adults have a history of having experienced stigmatization, discrimination, victimization, or violence during their lifespan from their families, schools, workplaces, and even health care providers. LGBT persons may present at more advanced stages of illness because they are more reluctant to seek health care or because they lack health insurance. LGBT persons may need to rely on close friends rather than on family for caregiving. Nonfamily caregiving may be due to estrangement from the person’s biological family or because LGBT persons are less likely to have children. Informal communities of support, also known as “lavender families” or “families of choice,” should be respected and included as family as identified by patients. The support from the “lavender family” may become essential in caring for an LGBT individual at the end of life. It may be challenging for LGBT persons who need custodial care to find a long-term care facility that is “LGBT friendly.” LGBT patients may hide their sexual identities to be able to access a long-term care facility, a phenomenon known as “going back in the closet.”
Transgender patients may face several unique stressors. Having a gender-variant body and requiring assistance for basic needs of daily living like
bathing, dressing, or feeding puts the transgender individual in a vulnerable position and at risk of being physically abused.
It is of extreme importance that the LGBT person takes steps to legally document their preferences for health care and other legal matters.
Individuals who are closest to the LGBT person may not be recognized as the default surrogate by law should the LGBT person no longer be able to make his or her own decisions, and it is possible that the default decision maker may not be respectful of the individual’s chosen gender identity.
LGBT patients should designate a Health Care Power of Attorney (HCPOA) and explicitly give that person the power to direct health care professionals regarding the patient’s preferred name, pronouns, and appearance consistent with their gender identity. Given that the HCPOA and advance directives are no longer effective once the person dies, it is important for LGBT persons also to put in place a Disposition of Bodily Human Remains (DBHR) document naming the person who will have authority over their remains after death. For example, a same-sex partner who held HCPOA would not be able to claim the body of his or her deceased loved one unless designated to do so in a DBHR document.
LGBT persons also may face challenges during the bereavement process, particularly if in the case of a same-sex relationship, the couple had not been openly acknowledged. The surviving individual may experience “silent mourning” and not have access to traditional grief support. Table 67-3 offers open-ended inclusive questions to use to better understand patients’ needs and to interact with them in a way that is aligned with their values and identity.
TABLE 67-3 ■ GETTING TO KNOW THE PATIENT IN AN INCLUSIVE WAY
PROGNOSTICATION IN OLDER ADULTS
Prognostication is a vital aspect of decision making as it provides patients and families with information to determine realistic and achievable goals of care, is used in determining eligibility for benefits such as hospice, and helps in targeting interventions to those likely to live long enough to benefit from a proposed intervention. For example, it may take many years for the benefits of preventive interventions such as cancer screening and tight glycemic control to accrue. Frail older adults whose life expectancy is shorter than this time horizon to benefit are exposed to the potential immediate harms of these preventive interventions without the possibility of receiving the benefits.
Prognostication can be broken into three general steps: (1) estimating the probability of an individual developing a particular outcome over a specific period of time, (2) the act of communicating the prognosis with the patient and/or family, and (3) the interpretation of the prognosis by the patient and/or family.
Step 1: Estimating Prognosis
Estimating prognosis in geriatric populations is more complicated than in younger populations, as older adults are more likely to have more than one chronic progressive illness that impacts life expectancy. Estimation of prognosis in multimorbid older adults requires clinicians to account for the interactions among their medical problems, clinical and laboratory findings, and functional and cognitive status. While prognostication based on clinician judgment is correlated with actual survival, it is subject to numerous biases; most importantly, clinicians tend to overestimate patient survival by a factor of 3 to 5. Prognostic indices that incorporate age and clinical characteristics such as multimorbidity and functional status may improve the accuracy of prognostication compared to clinician judgment alone. A helpful repository of published geriatric prognostic indices can be found at www.ePrognosis.org.
Step 2: Communicating Prognosis
Numerous studies have shown that most individuals and their families want prognostic information, yet they are often not given the opportunity to discuss it with health care providers. This is despite evidence that effective communication around end-of-life issues can provide benefits to patients and families, without worsening of anxiety, hopelessness, or depression.
When delivering prognosis, it is first helpful to ask about the patient’s understanding of their illness and perception of what the future may have in store for them. It is also helpful to assess the readiness of the patient and/or family member to have a discussion of prognosis by asking their permission to talk about it. Based on this information, the clinician should deliver prognostic information tailored to the patient’s current desire and level of understanding. When delivering prognostic estimates, clinicians should acknowledge the inherent uncertainty in most of them. One way to do so is by giving ranges (ie, days to weeks, weeks to months, months to years to live) or by giving the best case, worst case, and
most common scenarios. In addition, delivering information on prognosis is likely to bring emotional reactions in patients and their family members. Use of empathic statements, including naming the emotion, and the use of therapeutic silence can be helpful in supporting patients and their surrogate decision makers.
Step 3: Interpreting Prognosis
Patients’ and surrogates’ personal estimates of prognosis often differ from, and are generally more optimistic than, what is communicated to them by health care providers. Furthermore, few surrogates rely solely on prognostic information delivered by physicians to form their own prognostic estimates. Rather, their interpretation of prognosis is influenced by many factors, including perceptions of the patient’s strength, will to live, and unique history; individual observations of physical appearance; and the surrogate’s presence, optimism, intuition, and faith. Given this, it is helpful when delivering prognosis to include the factors that influence the clinician’s prognostic estimate, as these may also influence the patient’s or surrogate’s interpretation. It is also valuable to inquire how the clinician’s prognostic estimate has influenced how the patient or surrogate is currently thinking about the prognosis.
ADVANCE CARE PLANNING AND ADVANCE DIRECTIVES
ACP is a process that supports adults at any age or stage of health in understanding and sharing their personal values, life goals, and preferences regarding future medical care. ACP should be ongoing as the values or priorities of people may change as they go through life and should include continued support and preparation for future medical decision making. The focus of ACP is on meaningful conversations in order to create a care plan that aligns recommended treatments with patient preferences. There is evidence that ACP increases patient and surrogate satisfaction with both communication and medical care and decreases surrogate distress. ACP does not increase anxiety or lead to loss of hope. To effectively deliver ACP, health disparities, systemic racism, and cultural differences need to be addressed. These conversations are appropriate at any time in an older adult’s life, but clinicians should especially consider initiating goals of care
discussions if they answer “no” to the question “Would you be surprised if this patient dies in the next year?” Documentation of these discussions may include advance directives or physician orders further described in Table 67-4. Advance directives are written documents that state a patient’s
preferences for addressing future medical decisions. Advance directives can be executed at any time by a competent adult person and go into effect when the person is no longer capable of making health care decisions.
TABLE 67-4 ■ ADVANCE DIRECTIVES AND PHYSICIAN ORDERS
ADVANCE DIRECTIVES (FUTURE CARE)
Durable Power of Attorney for Health Care (DPOA-HC) or Health Care Proxy
1. Allows a patient to designate a person to make health care decisions on behalf of the patient should they lose decision-making capacity
2. Does not specify wishes for medical care
3. Ideal proxy is someone who the patient knows well, is a competent adult, and is willing/able to assist in health care decision making respecting the preferences of the patient should the need arise
4. Particularly important when the patient is estranged from their family or if the patient wants to name a friend who would not be in the hierarchy of surrogate decision makers
Living Will
1. Allows a patient to specify in writing wishes for future medical care in the hypothetical situation where they lose decision-making capacity
2. Difficult to cover all future possibilities, making the health care proxy choice important
PHYSICIAN ORDERS (CURRENT CARE)
Physician Order for Life-Sustaining Treatment (POLST)
1. Standing medical order specifying decisions regarding current medical care for patients with progressive chronic illnesses
2. Translates patient preferences on resuscitative efforts, artificial nutrition, and other life-sustaining treatment measures into an effective state-authorized medical order
3. Orders accompany the patient in different settings (home, hospital, nursing home)
Priorities in ACP discussions vary depending on the setting. In a relatively healthy person, the focus may be on making goals of care choices based on hypothetical situations or choosing a health care proxy. If a health care proxy is not chosen, the surrogate decision maker authorized to make health care decisions for an incapacitated patient is based on a hierarchy of closeness to the patient (eg, spouse, adult children, parents, etc), which may vary based on the state. In someone with a predictable progressive condition who is likely to lose functional or cognitive abilities, such as ALS or dementia, the conversation should include information on treatment options with significant long-term impact, such as tracheostomy or hemodialysis, as well as prognosis. In a seriously ill person, the choices are less hypothetical and an actual decision regarding treatment may be needed. In this circumstance, it is important to inform patients (or their proxies) of the risks, benefits, and alternative treatments available. Consideration should also be given to deactivating devices such as implantable cardioverter-defibrillators in patients close to death.
ACP is complex, and communication techniques differ based on the clinical situation. Chapter 71 expands on useful communication strategies. The Gerontological Society of America also has resources available to assist clinicians with communication with patients with cognitive and sensory impairments. A variety of resources to improve communication effectiveness in ACP are available online, including vitaltalk.org and capc.org. ICARE, a communication model used within the Veterans Health Administration, is provided as an example of discussing “Do not resuscitate” orders in Table 67-5. When having conversations about cardiopulmonary resuscitation (CPR), it is important to discuss the likelihood of this intervention meeting a patient’s goals. A recommendation for or against CPR may differ, for example, for a patient who prioritizes maintaining independence and one who prioritizes living as long as possible. Among adults older than 65 years, 17% of patients who undergo in-hospital CPR survive to hospital discharge and 7% are discharged home. The presence of serious conditions such as metastatic cancer, sepsis, or multiple organ failure is associated with worse CPR outcomes.
TABLE 67-5 ■ ICARE GOALS OF CARE FOR CODE STATUS
In 2020 as the COVID-19 pandemic spread throughout the world, ACP required immediate attention and innovation. For older adults at high risk for serious disease, readdressing care plans and eliciting preferences specifically pertinent to COVID-19, such as hospitalization, intubation, and CPR, were urgently needed. Telehealth increased and remains a powerful tool for patient outreach. When using telemedicine for ACP, it is important to create a welcoming environment with the electronic device situated at eye level with adequate sound to promote patient engagement, to invite patients to include those they would want involved in the discussion, to provide an overview of a telehealth visit, and to monitor for nonverbal and environmental cues. Sending an electronic summary of the visit can also be helpful.
Addressing Spiritual Needs
Comprehensive palliative care attends to the whole needs of the patient. The founder of modern palliative care, Dr. Cicely Saunders, coined the term Total Pain, meaning that palliative care attended to the pain that was physical, psychological, social, and spiritual. An international consensus project defines spirituality as “a dynamic and intrinsic aspect of humanity through which persons seek ultimate meaning, purpose, and transcendence, and experience relationship to self, family, others, community, society,
nature, and the significant or sacred. Spirituality is expressed through beliefs, values, traditions, and practices.”
While patients with serious illness wish to have spiritual needs addressed as part of their care, most physicians report discomfort doing so. Screening for spiritual needs allows for referral for support, which may come from a health care team’s pastoral care service or from within the patient’s own spiritual support network. A well-validated screening tool is the FICA Spiritual History Tool, which asks questions about Faith, Importance, Community, and how to Address these issues. Additional questions for communication within the FICA framework can be found at https://smhs.gwu.edu/spirituality-health/.
CONCLUSION
An essential component of geriatric medicine is the routine delivery of effective primary palliative care that includes attention to symptom management, prognostication, communication, ACP, and individualized care plans. Appropriate referrals to specialty-level palliative care should also be considered when palliative needs are great, given the increasingly robust evidence base of benefit for both palliative care and hospice.
FURTHER READING
Acquaviva KD. LGBTQ-Inclusive Hospice and Palliative Care: A
Practical Guide to Transforming Professional Practice. New York, NY: Harrington Park Press; 2017.
American Geriatrics Society Expert Panel on the Care of Older Adults with Multimorbidity. Patient-centered care for older adults with multiple chronic conditions: a stepwise approach from the American Geriatrics Society: American Geriatrics Society Expert Panel on the Care of Older Adults with Multimorbidity. J Am Geriatr Soc. 2012;60:1957–1968.
American Geriatrics Society. Choosing Wisely. https://www.choosingwisely.org/societies/american-geriatrics-society/. Accessed February 18, 2022.
Bakitas M, Tosteson T, Lyons K, et al. Early versus delayed initiation of concurrent palliative oncology care: patient outcomes in the ENABLE III
randomized controlled trial. J Clin Oncol. 2015;33:1438–1445.
Bernacki RE, Block SD. American College of Physicians High Value Care Task Force. Communication about serious illness care goals: a review and synthesis of best practices. JAMA Intern Med. 2014;174:1994–2003.
Fried LP, Tangen CM, Walston J, et al. Frailty in older adults: evidence for a phenotype. J Gerontol A Biol Sci Med Sci. 2001;56:M146–M156.
Gerontological Society of America (GSA). Communicating With Older Adults: An Evidence-Based Review of What Really Works. Washington, DC: Gerontological Society of America. 2012.
Hanson LC, Ersek M, Gilliam R, Carey TS. Oral feeding options for people with dementia: a systematic review. J Am Geriatr Soc. 2011;59:463–472.
Husebo BS, Ballard C, Sandvik R, Nilsen OB, Aarsland D. Efficacy of treating pain to reduce behavioural disturbances in residents of nursing homes with dementia: cluster randomised clinical trial. BMJ.
2011;343:d4065.
IOM (Institute of Medicine). Dying in America: Improving Quality and Honoring Individual Preferences Near the End of Life. Washington, DC: The National Academies Press; 2014. https://www.nap.edu/catalog/18748/dying-in-america-improving-quality-and-honoring-individual-preferences-near. Accessed February 18, 2022.
IOM (Institute of Medicine). Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care, 2002. http://nationalacademies.org/hmd/reports/2002/unequal-treatment-confronting-racial-and-ethnic-disparities-in-health-care.aspx. Accessed May 10, 2016.
IOM (Institute of Medicine) (US) Committee on Lesbian, Gay, Bisexual, and Transgender Health Issues and Research Gaps and Opportunities. The Health of Lesbian, Gay, Bisexual, and Transgender People: Building a Foundation for Better Understanding. Washington, DC: The National Academies Press; 2011. https://nationalacademies.org/hmd/reports/2011/the-health-of-lesbian-gay-bisexual-and-transgender-people.aspx. Accessed May 10, 2016.
Johnson KS. Racial and ethnic disparities in palliative care. J Palliat Med.
2013;16:1329–1334.
Mitchell SL, Teno JM, Kiely DK, et al. The clinical course of advanced dementia. N Engl J Med. 2009;361:1529–1538.
National Hospice and Palliative Care Organization. NHPCO’s Facts and Figures: Hospice Care in America. http://www.nhpco.org/hospice-statistics-research-press-room/facts-hospice-and-palliative-care.
Accessed May 10, 2021.
Quinn KL, Shurrab M, Gitau K, et al. Association of receipt of palliative care interventions with health care use, quality of life, and symptom burden among adults with chronic noncancer illness: a systematic review and meta-analysis. JAMA. 2020;324:1439–1450.
Sampson EL, Candy B, Jones L. Enteral tube feeding for older people with advanced dementia. Cochrane Database Syst Rev. 2009;2:CD007209.
Sidebottom AC, Jorgenson A, Richards H, Kirven J, Sillah A. Inpatient palliative care for patients with acute heart failure: outcomes from a randomized trial. J Palliat Med. 2015;18:134–142.
Singer AE, Meeker D, Teno JM, Lynn J, Lunney JR, Lorenz KA. Symptom trends in the last year of life from 1998 to 2010: a cohort study. Ann Intern Med. 2015;162:175–183.
Temel JS, Greer JA, Muzikansky A, et al. Early palliative care for patients with metastatic non-small-cell lung cancer. N Engl J Med.
2010;363:733–742.
Teno JM, Gozalo PL, Mor V, et al. Site of death, place of care, and health care transitions among US Medicare beneficiaries, 2000-2015. JAMA. 2018;320:264–271.
Zimmermann C, Swami N, Krzyzanowska M, et al. Early palliative care for patients with advanced cancer: a cluster-randomised controlled trial.
Lancet. 2014;383:1721–1730.
Chapter
68
Pain Management
Roxanne Bavarian, Amber K. Brooks
OVERVIEW
Pain is one of the most common reasons adults seek medical care. Pain prevalence increases with age and is associated with wide-ranging adverse health outcomes, including increased risk of falls, increased functional disability, declining mobility, cognitive decline, and decreased quality of life. The most common pain conditions in older adults include osteoarthritis, chronic neuropathic pain (eg, diabetes, herpes zoster), vertebral compression fractures (VCFs), pain associated with cancer and its treatments, and pain associated with other chronic illnesses. The prevalence of chronic (persistent) pain among older adults is estimated at 25% to 75% and is higher in residential care settings. Chronic pain tends to be more complex in older adults, with 60% to 70% describing multisite pain and more than 60% describing multiple types of pain. Diagnosing and treating chronic pain in older adults is further complicated by age-related changes in pathophysiology, such as cognitive impairment and organ system dysfunction; coexisting chronic medical conditions; and complex treatment considerations, such as polypharmacy and increased susceptibility to side effects, among other factors (Table 68-1).
TABLE 68-1 ■ COMPLEXITIES OF PAIN AND AGING
CLASSIFICATION OF PAIN
Pain is defined as an unpleasant sensory and emotional experience associated with actual or potential tissue damage.
Acute Versus Chronic Pain
Acute pain is defined by its sudden onset, its association with a noxious stimulus, and its short duration, limited to the period of tissue healing (approximately ≤3 months). Examples of acute pain include traumatic and postoperative pain. The period of poorly controlled acute postoperative pain is a particularly vulnerable one for older adults and is associated with an increased stress response (ie, increased heart rate, blood pressure, and respiratory rate), limited mobility, prolonged hospital stays, and increased risk of developing chronic pain. Conversely, acute pain that is appropriately managed during the perioperative period may help prevent the development of chronic pain.
Learning Objectives
Classify different types of pain.
Describe pathophysiological findings in chronic pain.
Review key aspects of pain evaluation and management.
Key Clinical Points
Pain in older adults is often underdiagnosed and undertreated.
Careful evaluation of pain requires a thorough history, physical examination, and use of validated pain measures.
Comprehensive pain management in older adults requires a patient-centered, multidimensional pain management strategy that recognizes the biological and psychosocial complexities and utilizes a combination of medication and nonmedication therapies.
Chronic Pain
Chronic (persistent) pain is defined as pain that persists or recurs for more than 3 months. In 2019, the International Association for the Study of Pain (IASP) updated its chronic pain classifications for the International Classifications of Diseases (ICD-11) (Table 68-2). Pain that is deemed a disease in its own right, such as fibromyalgia or nonspecific low back pain, is called “chronic primary pain.” In the other six subgroups, pain is secondary to an underlying disease: chronic secondary musculoskeletal pain, chronic neuropathic pain, chronic secondary headache or orofacial pain, chronic secondary visceral pain, chronic cancer-related pain, and chronic postsurgical or posttraumatic pain.
TABLE 68-2 ■ UPDATED CHRONIC PAIN CLASSIFICATIONS
High-impact chronic pain is defined as chronic pain that limits life or work activities on most days or every day during the past 6 months. In 2016, an estimated 8% of US adults (20 million) had high-impact chronic pain, with higher prevalence among older adults (11% among adults aged 65 to 84 years and 16% among those aged 85 years and older).
Pathophysiology of Chronic Pain
In a normal response to tissue injury with a noxious stimulus, the body will respond by activating immune cells, such as macrophages, leukocytes, and mast cells, that release proinflammatory mediators. These proinflammatory mediators include bradykinin, histamine, tumor necrosis factor, interleukin-1β, and interleukin-6, all of which promote the release of substance P and calcitonin gene-related peptide from nerve endings to ultimately activate
spinal pathways that cause pain. These pain signals occur in response to the initial inflammation to help “teach” a person to avoid further injuring the damaged tissue during the healing process. In acute pain, healing occurs over the ensuing days to weeks, leading to resolution of inflammation and pain signals in the body. Contrarily, in patients with chronic pain, the nervous system will continue to send signals for pain even after the initial injury has subsided. The pathophysiology of chronic pain revolves around this concept of central plasticity of the brain (“neuroplasticity”) in which neural connections are rewired and sensitivity to stimuli changes in response to an initial injury.
Neurological changes associated with chronic pain include hyperalgesia, allodynia, and the spread of pain. Hyperalgesia is defined as an increased sensitivity to painful stimuli, such as an exaggerated pain response to a gentle pinprick, scratch, or heat. Hyperalgesia occurs when the threshold of local nociceptors in the tissue is lowered, which increases the excitability of nociceptor neurons and pain pathways. Similarly, the receptive field of activation of these nociceptors can expand, leading to the spread of pain to adjacent, noninjured areas. Hyperalgesia and the spread of pain are caused by a process known as peripheral sensitization, in which inflammation causes the release of chemical mediators like histamine and bradykinin that influence the threshold and field of activation of nociceptors in the peripheral tissues. Central sensitization refers to maladaptive plasticity within the central nervous system. This maladaptive plasticity increases the synaptic strength of pain pathways and reduces pain inhibitory pathways, which ultimately leads to increased pain processing in the brain.
Subsequently, allodynia may ensue: nonpainful touch stimuli activate mechanoreceptor neurons that have been rewired to stimulate pain pathways. Patients may also report their pain being triggered by a cool breeze against the painful area or the weight of their clothing or bedsheets brushing against the area.
Although the pathophysiological concepts behind chronic pain, such as hyperalgesia, allodynia, and spread of pain, have been verified in animal models of chronic pain, understanding why some patients develop chronic pain is a subject that warrants further research.
Nociceptive Versus Neuropathic Pain
For treatment purposes, it is important to distinguish the underlying mechanism of pain in order to best tailor the treatment plan, particularly in regard to medications so as to minimize the risk of polypharmacy. Pain that results from the stimulation of pain receptors is called nociceptive pain.
Nociceptive pain may arise from tissue injury, inflammation, or mechanical malformation. Examples include trauma, burns, infection, arthritis, ischemia, and tissue distortion. Pain from nociception usually responds well to analgesic medications.
Neuropathic pain, on the other hand, is caused by a lesion or disease of the somatosensory system, including peripheral nerves and central neurons, and affects 7% to 10% of the population. Neuropathic pain conditions commonly seen in older adults include diabetic peripheral neuropathy, postherpetic neuralgia, and posttraumatic neuralgia (postamputation or phantom limb pain). Neuropathic pain conditions are often persistent and difficult to treat in older adults, as they are more susceptible to the side effects of commonly used neuropathic pain medications such as oversedation and falls. However, it is important to begin treatment as early as possible with a regimen associated with the least amount of harm in order to prevent long-term complications of persistent pain such as physical and psychological disability.
Aging and Its Effect on Pain Perception
Age-related changes in the nervous system may alter pain perception. A decrease in the number of pain receptors in the skin and other organs, altered nerve conduction, and even central nervous system changes have been linked to altered sensory processing in older adults. Similarly, studies of experimental pain have demonstrated age-related changes in perception. These studies used a heat probe, electrical stimulation, or other methods to induce pain in volunteers in an effort to identify a pain threshold or pain tolerance. They reveal that aging may decrease sensitivity to pain of low intensity; that reduced sensitivity is especially apparent for heat pain; and that aging has no strong effect on pain tolerance. Most researchers believe that age-associated changes in pain perception are subtle and that their clinical relevance may be minimal.
Conversely, older adults may have sensory impairments (visual or auditory), cognitive impairment, or sensory neuropathies that may interfere with the ability to appropriately communicate or even recognize pain symptoms.
Biopsychosocial Model for Older Adults With Pain
Chronic pain requires a multidimensional approach to treatment that incorporates the biological, psychological, and social factors that modulate a person’s pain. The biopsychosocial model (BPS) of chronic (persistent) pain describes the intricate interplay between these factors. The original BPS model has since been adapted for older adults (Figure 68-1). A number of biological factors are associated with chronic pain in older adults, including age (usually described as ≥ 65 years of age), sex, multiple coexisting chronic medical conditions, genetic factors, common coexisting symptoms (fatigue and sleep disturbance), and a variety of health-related behaviors (smoking, alcohol use, and illicit drug use). Psychological considerations include depression, anxiety, stress, substance misuse or abuse, pain-specific psychological factors (pain catastrophizing, pain coping, fear avoidance, and self-efficacy). Social influences include race/ethnicity, culture, socioeconomic status, ageism and elder abuse, social support, and social isolation. Ultimately, it is the combination of these biopsychosocial factors that determine outcomes like physical function, cognitive function, and quality of life.
FIGURE 68-1. Biopsychosocial model of pain for older adults. (Reproduced with permission from Miaskowski C, Blyth F, Nicosia F, et al. A biopsychosocial model of chronic pain for older adults. Pain Med. 2020;21[9]:1793–1805.)
Diagnosis of Pain in Older Adults
The diagnosis of pain relies heavily on self-report from the patient, which is considered the gold standard in measuring pain in both clinical practice and research. A thorough assessment of pain should include a series of questions regarding the location of pain, along with its onset, frequency, intensity, quality, and any modifying factors. When inquiring about pain intensity, it may be helpful to ask the patient about the range of their pain intensity over the past week, rather than solely in the present moment. The assessment of pain characteristics should be done regularly, both at the initial visit to accurately diagnose the source of pain (ie, musculoskeletal, visceral, neuropathic, or a combination of the above) and at follow-up visits to assess the benefit of any treatment rendered. As discussed in the previous sections, the longer pain goes untreated, the more likely it is to spread to neighboring areas of the body and become more difficult to diagnose.
Other pertinent questions in the assessment of pain include any history of trauma (eg, falls or injuries) or indirect trauma (eg, whiplash), especially given the risk for falls and occult fractures in older adults. Sleep quality should also be assessed for any indicators of poor sleep quality (eg, insomnia, frequent arousals, daytime hypersomnolence), which could impact the patient’s perception of pain. With the patient’s permission, a discussion of any recent personal stressors, such as the loss of a loved one, that may have contributed to the onset and/or intensity of pain may be relevant, particularly in neuropathic pain conditions where psychological stress is a known trigger.
In patients with pain, a review of systems should include whether the patient reports generalized muscle pain, generalized joint pain, swelling or erythema, or neurologic symptoms like numbness, tingling, or autonomic symptoms. The medical history should also be reviewed for any other chronic pain conditions and their history of management, which can help clinicians assess what treatment modalities have alleviated pain in the past. Other conditions associated with pain include psychiatric conditions, such as anxiety, depression, and posttraumatic stress disorder, which can potentially
contribute to or exacerbate symptoms of pain and may indicate a need for interdisciplinary treatment with a pain psychologist. Understanding the patient’s lifestyle and social history can also be informative, including their average daily activities, their stress levels, and their social support system. In addition, any history of drug addiction or alcoholism in either the patient or their family should be documented to assess their risk for opioid use disorder.
In older patients, obtaining an accurate history of their pain symptoms and review of systems may be difficult due to cognitive limitations and inability to respond to questions. In older adults, the line of questioning should be simple and transparent. For example, asking “Does this hurt?” may be more informative than “How are you doing?” For patients with difficulty responding to pain questions, a nonverbal observational pain scale may be indicated to better assess their pain. In patients with cognitive impairment or dementia, family or caregivers should also be asked to provide information regarding the patient’s pain.
Various verbal and nonverbal pain assessment scales have been reviewed in the evaluation of pain in older patients. These scales are generally classified as unidimensional or multidimensional. A unidimensional pain scale focuses on a single factor, such as the severity of pain or impact of pain on function. In patients with chronic pain, the impact of the pain on physical and mental function, such as their mobility, mood, and fatigue, may be more meaningful than the pain intensity itself. Table 68-3 provides a review of unidimensional pain scales. Multidimensional scales, on the other hand, include questions to assess pain intensity, interference with enjoyment of life, or history of pain treatments and relief. Table 68-4 shows a review of multidimensional pain scales that can be useful in assessing pain in older adults.
TABLE 68-3 ■ UNIDIMENSIONAL SCALES FOR PAIN MEASUREMENT
TABLE 68-4 ■ MULTIDIMENSIONAL PAIN SCALES FOR PAIN MEASUREMENT
Following the initial intake, a thorough physical examination should be completed. The affected location of pain should be evaluated for erythema or edema, which could be a marker of inflammation or infection. The musculoskeletal system can be evaluated by palpating for tender trigger points that duplicate the patient’s chief complaint, as well as assessing range of motion, posture, and gait. A neurologic examination for signs of focal muscle weakness, atrophy, or sensory impairments such as numbness or
tingling should also be performed to rule out any peripheral or central neuropathic condition.
TREATMENT
Physiologic Considerations and Medication Management in Older Adults
Age-related organ dysfunction poses significant challenges for nonopioid and opioid pain medication management in older adults. The gastrointestinal, hepatic, renal, and respiratory systems are particularly important to consider when initiating and dosing pain medications in this population. Clinicians should consider the many age-related pharmacokinetic changes before prescribing pain medications (Table 68-5). Decreases in gastric secretion and intestinal motility lead to altered absorption of certain medications. Aging is also associated with increased body fat and decreased lean body mass, total body water, and serum albumin, all of which affect the distribution of drugs throughout the body. Circulating albumin and other proteins bind analgesic medications such as nonsteroidal anti-inflammatory drugs (NSAIDs) and tricyclic antidepressants (TCAs); with less protein available, more unbound drug can lead to increased toxicity and drug–drug interactions.
TABLE 68-5 ■ PHYSIOLOGICAL CHANGES ASSOCIATED WITH AGING
In addition, alterations in hepatic and renal function in older adults can affect drug metabolism and elimination. Hepatic blood volume and blood flow decrease with age. Similarly, increasing age is associated with decreased renal blood flow and glomerular filtration rate, which may lead to increased serum concentrations of renally cleared medications and their metabolites. The resultant decline in hepatic and renal function may lead to increased risk for adverse medication events and drug–drug interactions secondary to elevated serum parent drug and metabolite concentrations.
Lastly, changes in the pulmonary system, such as decreased elasticity of the lung and increased chest wall rigidity, may increase the risk of respiratory complications such as respiratory depression, a consideration of paramount importance when prescribing and dosing opioid medications.
Opioid Medication and Prescribing Practices for Older Adults
The first wave of the opioid epidemic began with the increased prescribing of opioids in the 1990s, with overdoses increasing since at least 1999.
Furthermore, an analysis of Medicare beneficiaries’ prescription opioid use in 2016 showed that one in three Medicare Part D beneficiaries received a prescription for an opioid. Unfortunately, older adults are not immune from the detrimental consequences of opioid use disorder, including overdoses and deaths. The Centers for Disease Control and Prevention (CDC) reported an increase in drug overdoses and opioid deaths in the aging population, with a 7.7% increase in deaths related to opioid overdose in persons older than 65 years from 2013 to 2014. Older adults with opioid use disorder appear to be at higher risk of death compared to younger adults with the disorder. This disparity in mortality may be related to altered age-related organ function and accidental medication-related deaths.
In 2016, the CDC released its landmark guideline for prescribing opioids for chronic pain, which includes special considerations for prescribing practices in older adults. First, given reduced renal function and medication clearance even without renal disease, older adults may be more susceptible to the accumulation of opioids and, therefore, identifying the therapeutic window between safe dosages and higher dosages associated with respiratory depression and overdose may be extremely challenging.
Cognitive impairment may increase the risk for medication-related errors and opioid-induced confusion. Prescribing providers should pay particular attention to prescribing immediate-release rather than extended-release or long-acting opioids when initiating therapy (recommendation #4);
prescribing the lowest effective dose possible (recommendation #5); and maintaining close follow-up within 1 to 4 weeks of starting opioid therapy, with subsequent follow-up at least every 3 months or more frequently (recommendation #7). Clinicians should also consider instituting bowel regimens to prevent constipation, performing risk assessments for falls, monitoring for cognitive impairment, and performing random urine drug screen monitoring, especially when there are other caregivers involved in the patient’s care. In response to the opioid epidemic in the United States, the CDC has also recommended that prescribers utilize state-wide prescription drug monitoring programs (PDMPs) prior to prescribing any opioid or other controlled substance to monitor a patient’s use of controlled substances and minimize the risk for abuse, overdose, and diversion.
Commonly Used Opioid Medications
For patients with moderate acute or chronic pain that has failed to respond to conservative management with nonopioid medications or nonpharmacological therapies, weak opioids such as hydrocodone, codeine, and tramadol may be considered after a thorough discussion with the patient about potential risks versus benefits. In patients with severe acute or persistent pain, more potent opioids are available, including morphine, methadone, fentanyl, oxycodone, buprenorphine, hydromorphone, and oxymorphone. Of note, for opioids combined with nonopioid analgesics, the maximum dose is typically dictated by the maximum dose of the acetaminophen, NSAID, or aspirin component. A table of commonly used opioid medications with recommended starting doses is featured in Table 68-6.
TABLE 68-6 ■ COMMONLY PRESCRIBED ORAL OPIOID PAIN MEDICATIONS
Despite the number of different opioid medications, they generally possess similar mechanisms of action and pharmacokinetics. Opioids alleviate pain by binding to opioid receptors in the central nervous system and inhibiting the ascending pain pathway. In terms of pharmacokinetics, opioids are rapidly absorbed in the gut and then undergo a high rate of first-pass metabolism in the liver, where they are conjugated and form metabolites. Opioids then vary in their distribution due to their differing protein affinity, and lastly are excreted via bile to feces or via the kidneys. As discussed above, older adults may be prone to side effects of opioids due to slower gastrointestinal transit time associated with aging and increased gastric pH from concurrent use of proton pump inhibitors or antacids. In addition, increased adipose tissue and decreased lean body mass and total body water lead to changes in
drug distribution and a longer time for elimination. Thus, with any opioid medication, older patients are at risk for side effects such as central nervous system depression, sedation, respiratory depression, and constipation.
Because of the increased risk of side effects in older adults, it is recommended to start opioids at a lower dose, about 25% to 50% of the dose given to younger patients. The CDC recommends carefully reassessing a patient’s benefits and risks when opioid doses reach 50 morphine milligram equivalents per day (MME/day) and avoiding doses of 90 MME/day or more.
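The daily MME arithmetic above can be sketched in a few lines of code. This is an illustrative teaching sketch only, not a clinical dosing tool; the conversion factors are the published CDC oral MME factors, and the function and threshold names are hypothetical.

```python
# Illustrative sketch of a daily morphine-milligram-equivalent (MME) check.
# Conversion factors are the published CDC oral MME factors; for teaching
# only -- not for clinical dosing decisions.

CDC_ORAL_MME_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "hydromorphone": 4.0,
    "codeine": 0.15,
    "tramadol": 0.1,
}

def total_daily_mme(regimen):
    """regimen: list of (drug, mg_per_dose, doses_per_day) tuples."""
    return sum(mg * n * CDC_ORAL_MME_FACTORS[drug] for drug, mg, n in regimen)

def mme_flag(mme_per_day):
    """Apply the CDC 2016 thresholds: reassess benefits and risks at
    >=50 MME/day; avoid (or carefully justify) >=90 MME/day."""
    if mme_per_day >= 90:
        return "avoid/justify (>=90 MME/day)"
    if mme_per_day >= 50:
        return "reassess benefits and risks (>=50 MME/day)"
    return "below CDC caution thresholds"

# Example: oxycodone 10 mg four times daily -> 10 * 4 * 1.5 = 60.0 MME/day,
# which crosses the 50 MME/day reassessment threshold.
mme = total_daily_mme([("oxycodone", 10, 4)])
```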
A consideration in the use of opioids in managing chronic pain is the risk for development of opioid-induced hyperalgesia, a state of nociceptive sensitization in which patients taking opioids for the treatment of pain develop hyperalgesia, or increased sensitivity to painful stimuli. The mechanism of opioid-induced hyperalgesia is not entirely clear, but is thought to involve neuroplastic changes in the peripheral and central nervous system that lead to sensitization of pronociceptive pathways. While all patients receiving opioids are at risk for opioid-induced hyperalgesia, groups at increased risk include patients with a history of opioid use disorder, chronic pain patients receiving opioids, and patients on high doses of potent opioids. Clinical signs of opioid-induced hyperalgesia include loss of opioid effectiveness in the absence of disease progression, which may be difficult to distinguish from opioid tolerance. However, in contrast to patients with opioid tolerance, patients with opioid-induced hyperalgesia often show increased levels of pain with increased doses of opioids. Patients may also report symptoms such as diffuse allodynia extending beyond the region of the original pain. Management includes attempting to gradually reduce or eliminate the opioid while reevaluating symptoms. Combination therapy with nonopioid analgesics, opioids with unique properties such as buprenorphine, or NMDA receptor antagonists may prove more effective in patients with opioid-induced hyperalgesia.
Management of Acute and Postoperative Pain
Management of acute pain and pain around the time of surgery is challenging. Poorly controlled and undertreated pain is associated with prolonged hospitalization, delayed wound healing, increase in health care utilization and costs, poor patient satisfaction, and adverse psychological
consequences. In addition, one of the most significant long-term consequences of poorly treated acute postsurgical pain is the development of chronic pain. However, advances in modern medicine offer promising new approaches for the management of acute pain, especially postoperative pain.
Enhanced Recovery After Surgery (ERAS) protocols have become more commonplace across the United States, especially in light of the opioid epidemic. The mainstay of ERAS protocols is the judicious use of multimodal analgesia. ERAS protocols attempt to minimize the use of opioids and offer opioid-sparing techniques such as intravenous or oral acetaminophen, intravenous or oral NSAIDs, intravenous lidocaine infusion, intravenous ketamine infusion, liposomal bupivacaine, neuraxial and peripheral regional anesthesia techniques for select surgeries, as well as patient-controlled modalities. Opioids still play an important role in the treatment of acute and postoperative pain, but are limited by their side effects and other sequelae, especially in older adults. When using opioid medications for acute pain in the inpatient setting, it is important to consider risk factors for opioid-related respiratory depression (and other adverse events), including a history of sleep apnea, obesity, concomitant administration of other respiratory depressant drugs such as benzodiazepines, opioid-naïve or opioid-tolerant status, and chronic obstructive pulmonary disease. While supplemental oxygen is commonly used after surgery, it may produce acceptable oxygen saturation levels via pulse oximetry even with substantial hypoventilation. Thus, patients with risk factors for opioid-related respiratory depression should be considered for perioperative monitoring with continuous end-tidal carbon dioxide monitoring, especially during the first 24 hours after surgery.
Management of Chronic Pain
For patients with mild chronic pain, nonopioid analgesics, such as NSAIDs or acetaminophen, are recommended. These medications are available in tablet, capsule, topical, and injection form with many being available over the counter.
Acetaminophen Acetaminophen (Tylenol) is a widely available analgesic and antipyretic that can be used for mild-to-moderate pain of any etiology. While its mechanism of action is not yet fully understood, it is thought to activate descending serotonergic inhibitory pathways in the central nervous system to reduce pain perception. While acetaminophen is considered relatively safe in
its side effect profile when compared to NSAIDs, the main adverse reaction is hepatotoxicity due to acetaminophen overdose. In older adults, initial dosing is recommended at 500 to 1000 mg every 6 hours. The maximum total daily dose of acetaminophen is 4000 mg. In patients with hepatic impairment or a history of alcohol use, a maximum daily dose of 2000 to 3000 mg is recommended. Given that acetaminophen is available over the counter, by prescription, and in combination with other medications, such as opioids, it is important to review all of a patient’s current medications to ensure the patient is not taking more than the recommended dose.
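The cumulative-dose review described above amounts to summing the acetaminophen content of every product a patient takes and comparing it against the appropriate daily maximum. A hypothetical sketch of that check, using the limits stated in the text (4000 mg/day; conservatively 2000 mg/day with hepatic impairment or alcohol use):

```python
# Hypothetical sketch of a cumulative acetaminophen check across all of a
# patient's products (OTC, prescribed, and opioid combination products).
# Limits reflect the text: 4000 mg/day maximum, or a conservative 2000 mg/day
# with hepatic impairment or alcohol use. Illustrative only.

def daily_acetaminophen_mg(products):
    """products: list of (acetaminophen_mg_per_dose, doses_per_day) tuples."""
    return sum(mg * n for mg, n in products)

def within_limit(total_mg, hepatic_risk=False):
    """Compare against the conservative end of the 2000-3000 mg range when
    hepatic impairment or alcohol use is present."""
    limit = 2000 if hepatic_risk else 4000
    return total_mg <= limit

# Example: acetaminophen 500 mg every 6 hours (4 doses/day) plus a
# hydrocodone/acetaminophen 5/325 mg product three times daily:
# 500*4 + 325*3 = 2975 mg/day -- under 4000 mg, but over the conservative
# 2000 mg cap for a patient with hepatic risk.
total = daily_acetaminophen_mg([(500, 4), (325, 3)])
```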
Nonsteroidal anti-inflammatory drugs (NSAIDs) NSAIDs are another class of widely available medications with both anti-inflammatory and analgesic effects. They alleviate mild-to-moderate pain, particularly pain of inflammatory origin, such as osteoarthritis or rheumatoid arthritis. There are a variety of NSAIDs available either as a prescription or over the counter, each with different dosing regimens and tolerability profiles. Often, if one particular NSAID is not effective for a patient, it may be worthwhile trying a different NSAID. Table 68-7 reviews the most common NSAID medications and their doses.
TABLE 68-7 ■ COMMONLY PRESCRIBED NONSTEROIDAL ANTI-INFLAMMATORY DRUGS (NSAIDS)
NSAIDs work to reduce pain by reversibly inhibiting the activity of cyclooxygenase enzymes, COX-1 and COX-2, and consequently preventing the synthesis of prostaglandins, which mediate inflammation as well as the transmission of pain. NSAIDs that act as nonselective COX inhibitors are not recommended for long-term use due to side effects associated with inhibition of COX-1. COX-1 is present throughout the body and plays a role in protecting the gastric mucosal lining and maintaining renal and hepatic function, among other functions. Inhibition of COX-1 is associated with increased risk for gastrointestinal ulcers and bleeding. Thus, patients on NSAIDs for any period of time should be monitored for any symptoms of gastrointestinal pain or bleeding. Caution should be given when prescribing either NSAIDs or aspirin to older adults who are on anticoagulant therapy due to increased risk of bleeding. Other complications of long-term NSAID use include increased risk for cardiovascular events and kidney disease.
COX-2, on the other hand, is present throughout the body in lower concentrations and is expressed in response to injury and inflammation. NSAIDs that selectively target and inhibit COX-2 enzymes, such as celecoxib (Celebrex), can effectively reduce inflammation and pain while sparing patients from the organ toxicity associated with other NSAIDs. However, COX-2-selective NSAIDs are not risk-free and, similar to nonselective COX-inhibiting NSAIDs, are associated with increased risk for cardiovascular events.
For older adults at risk for adverse events associated with systemic NSAIDs, topical agents, such as diclofenac 1% gel, represent a safe form of anti-inflammatory treatment that can be massaged onto intact skin overlying painful areas, such as joints affected by osteoarthritis or other inflammatory conditions.
Other Adjunct Medications for Chronic Pain
In addition to opioid analgesics, a number of other medications may be helpful in the management of chronic pain depending on the diagnosis and etiology of the patient’s pain. Table 68-8 reviews common medications used in the management of chronic pain. These medications are often categorized as “adjuvants” or “co-analgesics,” as their initial indication was for conditions other than pain, such as depression or seizures; however, many of these medications can be used as a first-line treatment option for specific chronic pain conditions.
TABLE 68-8 ■ NONOPIOID COANALGESIC MEDICATIONS
Tricyclic antidepressants (TCAs) and serotonin-norepinephrine reuptake inhibitors (SNRIs) can be used in the management of chronic pain. By increasing the activity of serotonin and norepinephrine, these medications modulate the pain pathway to inhibit ascending pain pathways and promote descending pain inhibitory pathways. These medications have been shown to be effective in treating neuropathic pain as well as musculoskeletal pain.
Despite their efficacy at low doses, TCAs like amitriptyline and nortriptyline possess potent anticholinergic activity, which can lead to adverse effects, particularly in older individuals. These adverse effects can include memory impairment, confusion, and hallucinations. Patients are also at risk for dry mouth, blurred vision, constipation, nausea, urinary retention, impaired sweating, and tachycardia. SNRIs should also be initiated with caution due to their serotonergic activity and risk for seizures. Additionally, the Beers Criteria (American Geriatrics Society, 2019) recommend avoidance of SNRIs in patients with a history of falls and fractures.
For patients with chronic neuropathic pain conditions, such as diabetic neuropathies, postherpetic neuralgia, and trigeminal neuralgia, medications in the category of anticonvulsants may be helpful. These include gabapentinoids, such as gabapentin and pregabalin, and sodium channel blockers, such as carbamazepine and oxcarbazepine. These medications have a high risk for side effects such as central nervous system depression, fatigue, dizziness, and disorientation. Such disorientation can cause patients to take an incorrect dose or can precipitate falls. Thus, dosing should always be initiated low and increased gradually as needed to treat symptoms. A noteworthy precaution is the coprescribing of gabapentinoids with opioids, given their combined central nervous system depressant effects.
Clonazepam is a benzodiazepine that may be indicated for chronic neuropathic pain conditions, such as tinnitus or burning mouth syndrome, as well as movement disorders. However, benzodiazepines must be prescribed with caution in older adults because of the potential risk for sedation, confusion, and falls. For similar reasons, benzodiazepines and opioid medications should not be taken concomitantly. In addition, due to the risk for abuse and addiction, benzodiazepines such as clonazepam are included along with opioids in state-wide prescription drug monitoring programs and measured as lorazepam milligram equivalents (LMEs). Older adults taking daily benzodiazepines for more than 30 days may develop physiologic dependence and should be gradually tapered to avoid risk of withdrawal symptoms.
While sodium channel blockers like carbamazepine and oxcarbazepine are highly efficacious frontline treatments for neuropathic pain conditions such as trigeminal neuralgia, they require caution and close monitoring when prescribed. Because of the risk of sedation, clinicians should consider advising that a family member or caregiver monitor the patient when first initiating carbamazepine or oxcarbazepine. Because these drugs affect sodium levels and the bone marrow, regular monitoring of serum sodium and serum drug levels, a complete blood count with differential, and liver and renal function tests is recommended when initiating either medication. In addition, patients carrying an HLA-B*1502 or HLA-A*3101 allele (most common in persons of Asian ancestry) are at risk for severe skin reactions, such as Stevens–Johnson syndrome, and thus genotype screening may be indicated. Carbamazepine is also a potent cytochrome P450 inducer that can interfere with the metabolism of other drugs, particularly those with narrow therapeutic indices such as warfarin or lithium.
In patients with moderate-to-severe chronic musculoskeletal pain, muscle relaxants may be indicated to alleviate pain. A variety of muscle relaxants are available, including baclofen, methocarbamol, tizanidine, and cyclobenzaprine. While these medications are typically indicated on an as-needed basis to treat muscle spasm, patients with chronic musculoskeletal pain such as temporomandibular joint and muscle disorders caused by parafunctional habits (ie, jaw clenching and tooth grinding) may benefit from a nightly dose to reduce their pain. However, all muscle relaxants carry potential side effects of sedation and anticholinergic effects. The lowest possible dose is recommended when initiating therapy, with patients gradually increasing the dose either as needed or as tolerated.
Nonpharmacological Treatments
Chronic pain in older adults is often treated with a multimodal pain management approach: a combination of medications, physical therapy, psychological interventions, or interventional pain management.
Interventional pain management is defined as the discipline of medicine devoted to the diagnosis and treatment of pain-related disorders, principally with the application of interventional techniques (commonly referred to as injection therapies), independently or in conjunction with other treatment modalities. Interventional pain procedures are performed by a myriad of providers, including, but not limited to, anesthesiologists, physiatrists, neurologists, neurosurgeons, orthopedic surgeons, rheumatologists, and radiologists. Interventional pain procedures have the added benefit of targeting specific nociceptive transmission sites with the goal of minimizing the intake of oral medications and their end organ effects. Thus, interventional pain management techniques offer older adults an alternative treatment pathway with potentially fewer side effects.
Low back pain Lumbar epidural steroid injection (ESI) is a commonly used procedure for treating lumbar spinal stenosis, lumbar disc herniation, lumbar degenerative disc disease, and lumbosacral radicular pain. Pertinent imaging such as plain films, magnetic resonance imaging (MRI), and computed tomography (CT) scans should be reviewed before performing an ESI,
although physical examination findings and pain symptoms do not always correlate with imaging findings. ESIs are preferably performed with image guidance (CT or fluoroscopy) for increased accuracy using an interlaminar approach (midline or paramedian) or transforaminal approach. The transforaminal approach (placement of a needle within the neuroforamen) is preferentially used for patients with lumbosacral radicular pain who may also have low back pain. Contraindications to ESIs include coagulopathy, current anticoagulation use, infection (localized near injection site or systemic), uncontrolled diabetes, allergy to medication being injected (contrast, local anesthetic, steroid), anatomic changes that would prevent a safe procedure (congenital or surgical), or immunosuppression. Potential complications include bleeding, infection, neural injury, inadvertent injection of steroid or local anesthetic outside of the epidural space, and an allergic reaction to medications administered.
There is considerable controversy surrounding the efficacy of ESIs. The results of clinical trials are influenced by the type of interventional pain management specialist, injection approach (interlaminar vs transforaminal), pain type, and injectate. Nonetheless, there is general consensus that ESIs provide short-term relief (weeks to months) in well-selected patients. ESIs should be used in combination with other forms of pain management with the goal of reducing pain and improving function.
Lumbar Facet Injections Lumbar facet-mediated pain is a common cause of low back pain in older adults that is often associated with functional limitations. Lumbar facet joints, or zygapophyseal joints, are synovial joints in the lumbar region that are innervated by the medial branch (MB) nerves of the dorsal primary ramus. Lumbar facet-mediated pain may manifest in the low back and commonly refers pain to the groin, hip, or thighs, but rarely below the knee. Patients commonly report pain with bending over, twisting movements, lateral rotation, and prolonged sitting or standing. Typical examination findings include pain with direct palpation over the facet joints and pain with extension and lateral rotation, also known as facet loading.
Diagnostic studies (plain films, CT, or MRI) can be helpful in characterizing the degree of lumbar facet arthropathy (mild, moderate, or severe). Lumbar facet-mediated pain injections can be performed by injecting directly into the joint space (intra-articular) or by aiming the injection at the junction of the superior articular process and transverse process where the MB nerves reside. Diagnostic MB nerve blocks are most commonly performed, with fair to good evidence of efficacy. In addition, an 80% or greater reduction in pain after diagnostic MB nerve blocks supports proceeding to MB nerve radiofrequency denervation in an attempt to provide longer lasting relief.
Sacroiliac Joint Injections The sacroiliac joints (SIJs) are small, diarthrodial joints located at the junction of the sacrum and ilium, whose primary purpose is to act as a shock absorber for the spine. The ability of the SIJ to absorb shock from the spine decreases with age and is associated with increased pain. SIJ degenerative changes may be seen on imaging studies (plain films, CT, or MRI). Physical examination findings may include tenderness to palpation directly over the joint space or positive provocative tests (ie, Faber, distraction, compression, Gaenslen’s, or thigh thrust). The SIJ can refer pain to the hip, buttocks, sacrum, or thighs, but usually does not extend past the knee. It can be difficult to distinguish SIJ pain from other causes of low back pain such as discogenic pain, lumbar myofascial pain, or lumbar facet-mediated pain. Like facet-mediated pain, intra-articular steroid injections or nerve blocks of the L5 dorsal primary rami and the lateral branch (LB) of the S1-3 dorsal rami may be performed. Evidence regarding the efficacy of intra-articular SIJ steroid injections is limited. The best recommendation is to use these injections in combination with other multimodal pain management, including physical therapy, medication management (muscle relaxants, anti-inflammatories), SIJ stretching exercises, heat, ice, or a transcutaneous electrical nerve stimulation unit. If 80% relief from the nerve blocks is appreciated, the patient may proceed to radiofrequency denervation of the L5 dorsal primary rami and the lateral branch (LB) of the S1-3 dorsal rami for more prolonged relief.
Percutaneous Vertebral Augmentation for Compression Fractures VCFs are the most common fragility fracture reported in the literature. VCFs, by definition, compromise the anterior half of the vertebral body and the anterior longitudinal ligament leading to a characteristic wedge-shaped deformity. In older adults, the most common risk factor for VCFs is osteoporosis. The prevalence is highest among women older than 50 years, affecting an estimated 25% of postmenopausal women in the United States. An estimated 40% to 50% of people older than 80 years have sustained a VCF. Furthermore, people who have had one osteoporotic VCF are at five times the risk of sustaining a second VCF. Only a quarter of VCFs result from falls; most are precipitated by routine daily activities such as bending or lifting.
VCFs most frequently occur at the thoracolumbar junction (ie, the segment from T12 to L3) and can be diagnosed with plain films, CT scan, or MRI. MRI may provide additional information regarding the acuity of the VCF, that is, whether it is acute, subacute, or chronic. Symptoms of VCFs range widely: some patients report no pain, while others report severe pain. Symptomatic patients may benefit from conservative treatment including pain medications, immobilization via bracing, and physical therapy, while others fail to respond.
Two minimally invasive procedures, vertebroplasty (VP) and kyphoplasty (KP), are commonly used to treat persistent, acutely painful VCFs. In VP, cement is injected percutaneously into the fractured vertebral body via a cannula. KP is similarly performed percutaneously, but involves injection of the cement into an inflated balloon that creates a cavity that encapsulates the cement. Potential complications of both VP and KP include cement leakage into the spinal canal with resultant neurologic deficits and cement leakage into surrounding vascular structures with subsequent risk of pulmonary embolism. Contraindications to VP and KP include existing coagulopathy or use of blood thinners, burst fractures with retropulsed bone, and vertebral height loss greater than 66%.
Osteoarthritic joint pain and the use of interventional pain management Osteoarthritis (OA) is highly prevalent in older adults, with 60% of people 65 years and older diagnosed with arthritis or chronic joint pain. OA is a leading cause of disability, economic burden, and functional decline in older adults. OA is characterized by an active, dynamic process arising from an imbalance between the repair and destruction of joint tissues. Structural alterations in articular cartilage, adjacent bone, ligaments, synovium, and capsule have all been described in OA. Interventional pain management for knee and hip OA offers older adults nonsurgical and relatively low risk pain management therapy options. Interventional pain management for knee and hip OA is most often performed using fluoroscopy, MRI, CT scan, or ultrasonography (US) to help guide needle placement and increase accuracy of the procedure.
Intra-Articular Injections In general, three types of intra-articular injections may be performed: corticosteroid, hyaluronic acid (HA), also known as viscosupplementation, and platelet-rich plasma (PRP). These injections are commonly offered to patients with moderate-to-severe symptomatic OA who have responded unfavorably to physical therapy or nonsteroidal anti-inflammatory medications. Corticosteroid injections usually contain 40 to
80 mg of a depot steroid preparation such as triamcinolone acetate or methylprednisolone acetate in combination with a local anesthetic such as bupivacaine or lidocaine.
HA is naturally found in the knee and acts as a shock absorber and lubricant. Decreasing levels of HA are associated with OA. There are several commercially available HA preparations in the United States including Supartz, Synvisc, and Orthovisc. These formulations vary in origin (rooster combs vs bacterial), molecular weight, and the number of recommended injections (1–5).
PRP is the injection of autologous plasma with a high concentration of platelets into an affected joint or other area of interest. PRP injections are thought to relieve OA-related symptoms via the interaction of platelets, intrinsic joint tissues, and the release of growth factors and cytokines by activated platelets, which ultimately act to reduce inflammation and promote local healing.
In general, all three types of injections are safe and have relatively few side effects. Decisions about frequency, type, and imaging guidance for injection therapies are highly variable, but should be tailored to the patient.
Knee Pain Injection Treatments The knee is the largest joint in the body and the most common site of OA. The prevalence of knee OA is approximately two times higher in women than in men. Frequently reported symptoms are pain with ambulation, crepitus (cracking or popping sensation), warmth with palpation of the joint, or effusion (ie, swelling). Patients are good candidates for knee injections if they have radiographic evidence of moderate or severe OA. Corticosteroid, viscosupplementation, and PRP may be injected into the knee joint. In the short term (< 3 months), intra-articular injections of corticosteroid and local anesthetic mixtures are more effective than placebo. However, long-term benefits are not well established. PRP injections for knee OA, especially end-stage OA, show promising efficacy (up to 12 months in some studies). However, there remains high variability in the administration of PRP, including injection techniques, injection schedules, and number of centrifugations of autologous plasma, which can drastically alter the concentration of platelets injected.
Hip Pain Injection Treatments The hip joint is a ball and socket joint consisting of the articulation of the femoral head with the acetabulum. OA-related hip pain commonly travels to the groin but can also travel to the low back or buttock area. Patients report increased pain with standing and walking. Pain with
internal rotation of the hip is a hallmark physical examination finding. Corticosteroid in combination with a local anesthetic is the most commonly performed intra-articular hip injection, which is done under the guidance of imaging (fluoroscopy, CT, or ultrasound). Additional clinical studies are needed to evaluate the short- and long-term efficacy of HA and PRP injections for hip pain.
Neuropathic pain conditions and the use of interventional pain management Neuropathic pain conditions are more common in older adults, as the risk for conditions that cause neuropathic pain increases with age. For example, conditions like diabetes, herpes zoster, stroke, as well as cancer treatment can all result in neuropathic pain. While most patients respond to standard medications like anticonvulsants and gabapentinoids, older adults may not tolerate these frontline agents due to side effects of sedation and impaired cognition. Although relatively rare, neuropathic pain affecting the orofacial region, such as trigeminal neuralgia and postherpetic neuralgia, can be particularly difficult to manage due to its impact on speaking, chewing, and swallowing. Nerve blocks represent a safe, minimally invasive treatment option that can be both diagnostic, allowing clinicians to localize the source of pain, and therapeutic, in reducing the intensity or frequency of pain. For patients with suspected neuropathic pain of the facial region, an MRI of the brain with trigeminal protocol is recommended to rule out any mass lesion or vascular compression of the trigeminal nerve. Nerve blocks frequently performed in the orofacial region include inferior alveolar nerve blocks for trigeminal neuralgia of the V3 distribution, posterior superior alveolar nerve blocks for trigeminal neuralgia of the V2 distribution, occipital nerve blocks for occipital neuralgia and cervicogenic headaches, and sphenopalatine ganglion blocks for cluster headaches. While the duration of relief with a nerve block can be variable, for some patients, periodic nerve blocks can be helpful for treating flares of pain while avoiding polypharmacy with multiple analgesics and anticonvulsant medications.
For older adults with persistent, refractory orofacial pain who cannot tolerate frontline treatments such as anticonvulsant medications, referral to neurosurgery is recommended for consideration of microvascular decompression of the trigeminal nerve, rhizotomy, or gamma knife radiosurgery targeting the trigeminal nerve.
Behavioral Therapies for the Treatment of Pain
In addition to the aforementioned pharmacological and nonpharmacological interventions, behavioral therapy plays a central role in the management of pain, particularly in older patients who are prone to the side effects of medications or may not be candidates for interventional treatment.
Behavioral therapies are safe, noninvasive techniques intended to reduce pain, restore function and mobility, and improve overall physical and mental health. Indeed, many patients tell their clinicians directly that they would rather try a more conservative treatment than start a new medication or undergo an invasive procedure.
For patients with chronic musculoskeletal pain, such as osteoarthritis or myofascial pain, physical therapy is a noninvasive treatment that improves mobility, alleviates pain, and ultimately reduces disability and improves function. For patients without access to physical therapy, home care with stretching exercises and use of moist heat or ice as needed can help relieve pain. In chronic pain conditions such as fibromyalgia, physical activity is recommended to reduce symptoms of pain and stiffness with the ultimate goal of improving daily function. While a combination of aerobic, resistance, and flexibility training is ideal, each exercise regimen should be tailored to the individual and their functional goals. For example, in older patients with chronic pain and stiffness, a low-impact alternative such as aquatic-based physical therapy or aquatic-based exercise may be indicated to maintain mobility and reduce pain.
Patients with chronic pain frequently experience significant emotional limitations due to their condition. In addition to their persistent, distressing pain, they may experience difficulties in obtaining a diagnosis and effective management, leading to feelings of discouragement and depressed mood.
Some patients may develop disability due to their pain and isolate themselves from others. Psychological interventions have been well studied in the treatment of chronic pain conditions such as fibromyalgia, chronic low back pain, and rheumatoid arthritis. The goal of this treatment is to help patients cope with their pain and minimize any disability and distress.
Evidence-based psychological interventions for patients with chronic pain include both talk-based and behavioral therapies, such as cognitive behavioral therapy, acceptance and commitment therapy (ACT), and biofeedback. For patients who lack access to a psychologist trained in cognitive behavioral therapy, or who are reluctant to begin treatment with a psychologist, mindfulness-based stress reduction techniques may be helpful as another mind-body approach to improving physical and mental health. Examples of stress reduction techniques include meditation, yoga, deep breathing exercises, and body scans. Evidence supports such techniques for chronic pain, including low back pain; their underlying goal is to help patients increase their moment-to-moment awareness and accept uncomfortable emotions and physical discomfort.
Various complementary and alternative therapies can potentially reduce pain, including acupuncture, massage therapy, Reiki healing, and dietary modifications or supplements, which many patients may seek out as alternatives to conventional Western medicine. Clinicians should support patients as they develop their own approach to adapting to chronic pain and maintaining functionality, and should discuss these treatment options with them. Dietary modifications and supplements should also be reviewed with the patient to ensure that they pose no harmful side effects or interactions with the current medication regimen.
FURTHER READING
Booker SQ, Herr KA. Assessment and measurement of pain in adults in later life. Clin Geriatr Med. 2016;32(4):677–692.
Brooks AK, Udoji MA. Interventional techniques for management of pain in older adults. Clin Geriatr Med. 2016;32(4):773–785.
By the 2019 American Geriatrics Society Beers Criteria® Update Expert Panel. American Geriatrics Society 2019 Updated AGS Beers Criteria® for potentially inappropriate medication use in older adults. J Am Geriatr Soc. 2019;67(4):674–694.
Centers for Disease Control and Prevention. (2016, January 1). Increases in drug and opioid overdose deaths—United States, 2000–2014. Morbidity and Mortality Weekly Report. https://www.cdc.gov/mmwr/preview/mmwrhtml/mm6450a3.htm?s_cid=mm6450a3_w. Accessed May 29, 2021.
Chau DL, Walker V, Pai L, Cho LM. Opiates and elderly: use and side effects. Clin Interv Aging. 2008;3(2):273–278.
Chiu IM, von Hehn CA, Woolf CJ. Neurogenic inflammation and the peripheral nervous system in host defense and immunopathology. Nat Neurosci. 2012;15(8):1063–1067.
Dahlhamer J, Lucas J, Zelaya C, et al. Prevalence of chronic pain and high-impact chronic pain among adults—United States, 2016. MMWR Morb Mortal Wkly Rep. 2018;67:1001–1006.
Department of Health and Human Services. (2016). Opioids in Medicare Part D: Concerns about extreme use and questionable prescribing. Office of the Inspector General. https://oig.hhs.gov/oei/reports/oei-02-17-00250.asp. Accessed May 29, 2021.
Dowell D, Haegerich TM, Chou R. CDC guideline for prescribing opioids for chronic pain—United States, 2016. MMWR Recomm Rep. 2016;65(No. RR-1):1–49.
Lautenbacher S, Peters JH, Heesen M, Scheel J, Kunz M. Age changes in pain perception: a systematic review and meta-analysis of age effects on pain and tolerance thresholds. Neurosci Biobehav Rev. 2017;75:104–113.
Lee M, Silverman SM, Hansen H, Patel VB, Manchikanti L. A comprehensive review of opioid-induced hyperalgesia. Pain Physician. 2011;14(2):145–161.
Mitra S, Carlyle D, Kodumudi G, Kodumudi V, Vadivelu N. New advances in acute postoperative pain management. Curr Pain Headache Rep. 2018;22(5):35.
Pergolizzi J, Böger RH, Budd K, et al. Opioids and the management of chronic severe pain in the elderly: consensus statement of an International Expert Panel with focus on the six clinically most often used World Health Organization Step III opioids (buprenorphine, fentanyl, hydromorphone, methadone, morphine, oxycodone). Pain Pract. 2008;8(4):287–313.
Treede RD, Rief W, Barke A, et al. Chronic pain as a symptom or a disease: the IASP Classification of Chronic Pain for the International Classification of Diseases (ICD-11). Pain. 2019;160(1):19–27.
Williams ACC, Fisher E, Hearn L, Eccleston C. Psychological therapies for the management of chronic pain (excluding headache) in adults. Cochrane Database Syst Rev. 2020;8(8):CD007407.
Chapter 69
Management of Common Nonpain Symptoms
Christine S. Ritchie, Alexander Smith, Christine Miaskowski
OVERVIEW: COMMON NONPAIN SYMPTOMS IN OLDER ADULTS
Advances in medicine have led to our ability to stave off many acute life-threatening events and have contributed to the development of numerous co-occurring chronic conditions, defined as multimorbidity. The experience of multiple conditions is often characterized by an array of symptoms associated with these conditions and/or their treatments. The presence of bothersome symptoms in older adults may contribute to illness burden in ways that may not be predictable on the basis of any one diagnosed disorder; moreover, these symptoms have a negative influence on older adults’ function and quality of life (QoL).
Many symptoms are addressed throughout this textbook: anorexia is covered in Chapter 30, sleep disorders in Chapter 44, dizziness in Chapter 45, pain in Chapter 68, depressed mood and anxiety in Chapters 65 and 66, and constipation in Chapter 87. The focus of this chapter is on overall symptom assessment, evaluation, and management of fatigue and shortness of breath, especially for those with advanced illness, and special considerations in the context of multimorbidity.
Multiple symptoms occur frequently in chronically ill older adults. Because many individual conditions, such as cancer, heart failure, and chronic obstructive pulmonary disease (COPD), are associated with high symptom burden, their accumulation contributes to a large and complex array of symptoms (Figure 69-1). In a nationally representative study of older adults that assessed for pain, fatigue, breathing difficulty, sleeping difficulty, depressed mood, and anxiety, 28% of the population had three or more
symptoms. Of note, reduced physical function and falls were more common in those with higher symptom burden. In a population-based sample of older adults using a 10-item symptom tool, the average number of symptoms was
3.7 and one-third of the population had five or more. In this cohort, the most common nonpain symptoms were fatigue (48%), weakness (39%), constipation (36%), anxiety (36%), anhedonia or depression (37%), shortness of breath (35%), and poor appetite (17%). In a study of 318 adults followed by a housecalls program, 43% reported severe burden from one or more symptoms. The symptoms with the highest severity ratings were depression, pain, loss of appetite, and shortness of breath. Few longitudinal studies of the general older adult population have assessed changes in the symptom experience over time. Among 754 older adults in their last year of life, the monthly occurrence of one or more “restricting symptoms” was fairly constant in the first half of the year prior to death (20%). At approximately 5 months prior to death, the rate increased rapidly from 27% to 57% in the month prior to death.
Learning Objectives
Review the prevalence of and assessment strategies used to evaluate common nonpain symptoms in older adults.
Address the evaluation and management of fatigue and dyspnea.
Describe symptom management considerations in the context of multimorbidity.

FIGURE 69-1. Pain and nonpain symptoms in common chronic conditions. Top three most common (in rank order) symptoms by condition. AIDS, acquired immunodeficiency syndrome; CHF, congestive heart failure; CKD, chronic kidney disease; COPD, chronic obstructive pulmonary disease.
Key Clinical Points
Many older adults experience a higher symptom burden associated with multiple co-occurring conditions. Screening for common symptoms should be a routine component of a comprehensive geriatric assessment (CGA).
Multicomponent interventions that rely on nonpharmacologic treatments are most effective for the management of fatigue and breathlessness.
Because the evidence base for symptom management in older adults is sparse, especially for those with multiple chronic conditions, a thoughtful approach that is informed by patient preferences is necessary to optimize benefit and minimize adverse effects.
OVERALL EVALUATION OF SYMPTOMS
The CGA historically did not include an assessment of pain or nonpain symptoms. More recent versions of the CGA include systematic assessments of these symptoms.
Many symptom measures were developed initially for cancer patients and only evaluate symptoms over short time intervals. Other symptom measures focus on a particular disease without capturing the impact of nonindex conditions on symptoms. With few exceptions, nondisease-specific symptom inventories are not available for older adults with multiple conditions. One screening tool developed from a community-dwelling population of 1000 older adults, the Brief Symptom Inventory, assesses 10 common symptoms in older adults: shortness of breath, feeling tired or fatigued, problems with balance or dizziness, weakness, daily pain, stiffness,
constipation, poor appetite, anxiety, and anhedonia. The tool shows discriminant and convergent validity. Another recently developed tool, from Denmark, captured 36 symptoms in a representative sample of 100,000 Danish people 20 years and older. The average number of symptoms reported by the population was 5.4, and each additional comorbid condition was associated with one additional symptom.
Many comprehensive symptom assessments were borrowed from oncology. The Memorial Symptom Assessment Scale (MSAS) measures the occurrence, severity, and distress of 32 symptoms and the Edmonton Symptom Assessment Scale measures 9 common symptoms on a 0 to 10 numerical rating scale. These instruments have been used to assess symptoms and multiple dimensions of the symptom experience in both geriatric and palliative care noncancer patients. The large number of items in the MSAS can be a practical barrier to its use in the clinical setting.
EVALUATION AND MANAGEMENT OF FATIGUE AND DYSPNEA
Evaluation of Fatigue
The prevalence of generalized fatigue in older adults varies from 5% to almost 70%, depending on the measure used, the characteristics of the older adult population, the time of day fatigue is assessed, and the cut points used to define fatigue. Regardless of the assessment tool, fatigue is associated with decreased function and increased mortality. In a study of 492 older primary care patients, the question “Do you feel tired most of the time?” identified older adults with a one and a half to almost twofold increased risk of mortality over 10 years. Other fatigue assessments tend to be longer and assess multiple dimensions of fatigue; again, most are derived from oncology. The Brief Fatigue Inventory, a nine-item instrument that assesses the severity of fatigue and its impact on daily functioning over the past 24 hours, has good psychometric properties but has predominantly been used in cancer populations. More recently, the concept of fatigability was developed as a complement to the symptom of fatigue. Fatigability measures how fatigued an individual feels in relation to defined activities (eg, how fatigued a person feels after a 5-minute treadmill test). It offers a more standardized way to measure fatigue but is more difficult to integrate into standard practice.
A more comprehensive assessment of fatigue should address (1) when the fatigue began, whether the onset was sudden or more gradual; (2) the pattern of fatigue over the course of a day; (3) what exacerbates or improves the fatigue; and (4) the impact that fatigue has on daily function and relationships. Sleep characteristics and medications should be reviewed.
Because psychological factors are often associated with fatigue, systematic assessment for depression is warranted.
The biopsychosocial model, commonly used to characterize pain, applies well to other symptoms, including fatigue. The model posits that psychological, social, spiritual, and physical factors influence an individual’s symptoms (Figure 69-2). In the case of fatigue, factors such as obesity, depression, social isolation, loneliness, and spiritual distress may influence the symptom experience. A number of physical conditions are characterized by fatigue, including electrolyte disturbances, occult malignancy, polymyalgia rheumatica, occult hepatitis, and HIV. Fatigue may be a silent harbinger of anemia or infection. In older adults, in whom atypical presentations of acute conditions are common, new-onset fatigue may indicate a recent myocardial infarction (MI) or heart failure.
Hypogonadism is a less common cause of fatigue. Findings from the history and physical examination should dictate further diagnostic evaluation.
FIGURE 69-2. Biopsychosocial model.
Management of Fatigue
Very few studies have addressed the management of fatigue in older adults; more trials have focused on chronic fatigue syndrome in younger adults. A few studies have evaluated fatigability in the context of frailty. Approaches that have not shown promise include protein supplementation, thyroid supplementation (in subclinical hypothyroidism), and testosterone (in men).
Counseling, vitamin D3 (in those with vitamin D deficiency), and functional training have shown some promise. Improvement of older adults’ sleep hygiene through both nonpharmacologic and pharmacologic strategies (see Chapter 44) may reduce fatigue. While antidepressants may be beneficial, antidepressants with strong anticholinergic effects should be avoided (see Chapter 65). In the palliative care setting for older adults with advanced or life-limiting illnesses, psychostimulants such as methylphenidate or modafinil may be considered.
In a study of predominantly older prostate cancer patients (ages 52–94), methylphenidate reduced fatigue severity compared to control. However, a number of patients had to discontinue treatment due to increased blood pressure and tachycardia. A recent within-person crossover trial cast doubt on methylphenidate treatment of fatigue in advanced cancer: among 43 participants who alternated between methylphenidate and placebo three times over a 9-day period, no improvement in fatigue was observed on days with methylphenidate. Psychostimulants should not be used in older adults with known cardiovascular disease or arrhythmias. While anecdotal reports touted the benefit of donepezil for treating opioid-induced fatigue in cancer patients, a subsequent clinical trial did not support these claims. In summary, fatigue is a common condition in older adults, with multiple factors likely contributing to its prevalence.
Evaluation of Dyspnea (Breathlessness)
Due to the high rates of COPD and heart failure in older adults, along with an array of other comorbid conditions, shortness of breath (dyspnea) is very common in older adults. In an Australian primary care practice, over half of those who presented with shortness of breath were older than the age of 65.
The prevalence of dyspnea in community-dwelling older adults ranges between 17% and 62%, depending on the population studied and cut point used to define “dyspnea.” Dyspnea will likely occur at some point during a number of serious illnesses experienced by older adults (eg, cancer, heart
failure, advanced lung disease), and at the end of life. The biopsychosocial model applies to dyspnea as well as it does to fatigue. Anxiety, disappointment, financial stressors, and questions about meaning often contribute significantly to the experience of dyspnea. Dyspnea, in turn, serves as a source of patient and caregiver distress and is associated with decreased QoL, decreased function, and increased health care utilization.
Physiologic mechanisms of dyspnea often fall into three categories: increased respiratory effort due to obstruction (eg, COPD, asthma, masses) or restriction (eg, obesity, pleural effusion); weakness (eg, multiple sclerosis, amyotrophic lateral sclerosis); and ventilation/perfusion mismatch (eg, anemia, pulmonary embolism, heart failure). More subtle systemic changes can also contribute to dyspnea, especially at the end of life. For example, in the National Hospice Survey, 24% of patients with no known cardiopulmonary disease experienced dyspnea. The most appropriate measure of shortness of breath is the older adult’s self-report; dyspnea does not always correlate with hypoxia, hypercarbia, or the presence of tachypnea. Most self-report tools for assessing dyspnea come from the obstructive lung disease literature. One of the most common is the Medical Research Council (MRC) breathlessness scale. First published in the 1950s, the MRC scale uses five statements to characterize the range of disability caused by breathlessness (Table 69-1). It correlates well with other dyspnea scales and with direct measures of function, such as walking speed.
TABLE 69-1 ■ MEDICAL RESEARCH COUNCIL BREATHLESSNESS SCALE
Management of Shortness of Breath
Nonpharmacologic approaches to breathlessness have few side effects, unlike morphine for example, and avoid polypharmacy, a major concern in older adults. They should be considered alongside pharmacologic approaches as first-line therapy. Nonpharmacologic approaches that can help treat dyspnea include use of a fan, breathing techniques, mindfulness and relaxation, anxiety management, and energy conservation. Breathing techniques that can reduce the sensation of dyspnea include pursed-lip breathing, prolonged exhalation, and posture modification. In addition to mindfulness and relaxation, guided imagery and distraction strategies (eg, music, TV, reading by self or caregiver) have been shown to reduce the sensation of breathlessness.
In patients with COPD and hypoxia, long-term oxygen therapy increases QoL and prolongs survival. Likewise, in a subset of patients with COPD, noninvasive ventilatory support such as bilevel positive airway pressure (BiPAP) can improve QoL and prolong survival. In patients with advanced cancer and other nonpulmonary causes of dyspnea, the value of oxygen supplementation in improving outcomes is less clear. These results may be due in part to the fact that less than half of advanced cancer patients with dyspnea are hypoxic. In a study of nasal cannula–delivered air versus nasal cannula–delivered oxygen (median age 65), no difference in dyspnea relief was found between the two modalities when patients were not hypoxic; in two studies of hypoxic cancer patients, more benefit from oxygen was noted. Recent studies of noninvasive ventilation in advanced cancer patients suggest that it may improve symptoms. In a multisite study of 200 predominantly older adults with advanced cancer (mean age 71), noninvasive ventilation was more effective than oxygen supplementation alone in reducing dyspnea and decreasing the amount of morphine needed to control symptoms. However, more patients in the noninvasive ventilation arm discontinued treatment, primarily due to mask intolerance and anxiety.
Pharmacologic treatment for dyspnea should first and foremost be directed at the underlying cause, if known. Treatment may include a β-agonist for COPD or a diuretic in the case of heart failure. In a subset of patients, breathlessness persists at rest or on minimal exertion despite optimal treatment of the underlying chronic condition. These patients are described as having refractory dyspnea, and in them opioids may be beneficial. A number of studies of opioids for the treatment of refractory dyspnea in the setting of advanced illness demonstrate some benefit. However, the findings of reduced dyspnea (compared to control) are not consistent, and many studies remain underpowered to produce conclusive findings. Unfortunately, many of these studies use morphine, which has active metabolites (eg, morphine-6-glucuronide, morphine-3-glucuronide). Morphine-3-glucuronide accumulates in renal insufficiency, which is common in older adults, and is responsible for symptoms of neurotoxicity (eg, hyperalgesia, allodynia, myoclonus). In older adults, opioids are associated with decreased mental functioning. For some patients, decreased mental functioning will not be acceptable despite the potentially positive impact on breathing.
IMPACT OF MULTIMORBIDITY ON SYMPTOM MANAGEMENT AND PATIENT DECISION MAKING
For older adults, multimorbidity is the norm, not the exception. Over 90% of Americans age 65 and older have two or more chronic conditions. In older adults, symptom burden increases with the number of co-occurring conditions and is associated with doubled mortality rates and reduced QoL.
Symptom Management in Multimorbidity
Multimorbidity can constrain options for managing symptoms. Many pharmacologic treatments for symptoms have adverse effects that are magnified in patients with multimorbidity. For those with heart failure or hypertension, nonsteroidal anti-inflammatory agents often exacerbate these conditions. For those with cognitive impairment, opioids may make confusion more marked. As is well known to those caring for older adults, initiating a medication for one condition or symptom often leads to side effects that require a second medication to manage—the “prescribing cascade.” Analgesics provide an apt illustration of this phenomenon. Use of a nonsteroidal anti-inflammatory drug (in those for whom it is not contraindicated) often requires additional medications to reduce the risk of negative gastrointestinal (GI) outcomes (eg, adding a proton pump inhibitor to reduce the risk of GI bleeding) or cardiovascular outcomes (eg, adding aspirin to reduce the risk of MI or stroke). In the case of opioids, patients must routinely be started on bowel stimulants or other laxatives to avoid the expected side effect of constipation.
Even without taking into account these challenges, many medications used to address symptoms have not been evaluated in older adults or in adults with multimorbidity. Among the symptom-focused studies done in older adults with advanced illness or multimorbidity, the most evidence exists for management of pain, and to a much lesser degree, dyspnea in cancer. There is a dearth of evidence for how to manage symptoms in noncancer illnesses, let alone in patients with several diseases. Therefore, the true benefits and the true risks are largely unknown. For this reason, pharmacologic management of symptoms warrants thoughtful discussions with the patient and/or family caregivers, as well as cautious initiation and careful monitoring of both the positive and negative effects of the treatment. For many older adults, symptom management involves both relief of discomfort and development of new drug-induced challenges. Anticipatory and ongoing discussion of the benefits and burdens of pharmacologic symptom management in the context of the patient’s goals and preferences offers an ongoing person-centered approach to improving the older person’s QoL.
The Role of Patient Preference in Symptom Management
Given the lack of evidence for optimal symptom management in older adults generally, and in older adults with multimorbidity in particular, treatment strategies for symptom management are preference sensitive—that is, there is often more than one reasonable treatment option, or a particular option offers uncertain benefit. For preference-sensitive decisions, the clinician must understand what is most important to the patient to determine the best treatment option. A starting point may involve asking patients to prioritize a set of universal health outcomes that apply across individual diseases. Typical outcomes include living as long as possible, maintaining function, staying cognitively intact, and alleviating pain and other symptoms. Other outcomes may include staying out of the hospital or dying at home. While not always mutually exclusive, understanding how patients prioritize these outcomes can help guide decisions about symptom management. This broad-based approach provides a more unified starting point than asking patients or caregivers to make specific treatment decisions.
Optimal decisions regarding symptom management require that the patient be adequately informed about the expected benefits and harms of the different treatment options for their symptoms, recognizing that in many instances optimal information is lacking. Actual decision-making preferences vary widely among patients and family caregivers. Some individuals prefer to make the decision themselves, while others prefer that the decision for a specific treatment be made by the clinician. In either instance, most individuals want their opinion used to guide the decision-making process.
Then, the clinician can develop a management plan and evaluate at regular intervals whether the management plan remains concordant with patient’s wishes. Reevaluation of treatment plans is critical as studies suggest that older adults with multimorbidity engage in dynamic reassessments of their conditions, as they shift between experiencing disruption from the condition and finding the ability to adapt to the condition’s challenges.
The following case illustrates these decision-making issues. Consider an 86-year-old man with advanced COPD who presents to the emergency department acutely dyspneic. He wants to live a few more days to say goodbye to family members and friends, even if it means living in the hospital. He does not want to be intubated or sent to the intensive care unit (ICU). He agrees to try BiPAP while diuresis is initiated. He expresses immediate, profound relief of dyspnea upon securing the mask. However, the next morning, he asks that the mask be removed because it feels suffocating. He has had the opportunity to talk to his family, and the distress associated with
the BiPAP mask no longer makes it worthwhile. The patient’s clarity around what is important to him can guide the clinician in these preference-sensitive treatment decisions.
SUMMARY
Experiencing multiple symptoms is common in older adults and particularly common in older adults with multimorbidity. For most symptoms, pharmacologic and nonpharmacologic approaches have some evidence base. Unfortunately, the evidence base is very limited for older adults, especially for those with multiple chronic conditions. Because the evidence base for symptom management is so sparse, treatments are preference sensitive and should take into account the values and preferences of older patients and their caregivers. Universal outcomes around function, cognition, comfort, and survival may be good starting points regarding which treatment options make the most sense. Ultimately more research will be needed to ascertain which treatments are the most beneficial for older adults and those with multimorbidity.
FURTHER READING
Eldadah BA. Fatigue and fatigability in older adults. PMR. 2010;2(5):406–413.
Elnegaard S, Andersen RS, Pedersen AF, et al. Self-reported symptoms and healthcare seeking in the general population—exploring “The Symptom Iceberg”. BMC Public Health. 2015;15:685.
Glynn NW, Santanasto AJ, Simonsick EM, et al. The Pittsburgh Fatigability Scale for older adults: development and validation. J Am Geriatr Soc. 2015;63(1):130–135.
Hardy SE, Studenski SA. Fatigue predicts mortality among older adults. J Am Geriatr Soc. 2008;56(10):1910–1914.
Jones PW, Harding G, Berry P, Wiklund I, Chen W-H, Leidy NK. Development and first validation of the COPD assessment test. Eur Respir J. 2009;34:648–654.
Jones PW, Quirk FH, Baveystock CM. The St George’s respiratory questionnaire. Respir Med. 1991;85(suppl B):25–31.
King DE, Xiang J, Pilkerton CS. Multimorbidity trends in United States adults, 1988–2014. J Am Board Fam Med. 2018;31(4):503–513.
Lehti TE, Öhman H, Knuutila M, et al. Symptom burden is associated with psychological wellbeing and mortality in older adults. J Nutr Health Aging. 2021;25(3):330–334.
Mahmoud AM, Biello F, Maggiora PM, et al. A randomized clinical study on the impact of Comprehensive Geriatric Assessment (CGA) based interventions on the quality of life of elderly, frail, onco-hematologic patients candidate to anticancer therapy: protocol of the ONCO-Aging study. BMC Geriatr. 2021;21(1):320.
Martinez-Amezcua P, Simonsick EM, Wanigatunga AA, et al. Association between adiposity and perceived physical fatigability in mid- to late life. Obesity (Silver Spring). 2019;27(7):1177–1183.
Mendoza TR, Wang XS, Cleeland CS, et al. The rapid assessment of fatigue severity in cancer patients—use of the Brief Fatigue Inventory. Cancer. 1999;85:1186–1196.
Mitchell GK, Hardy JR, Nikles CJ, et al. The effect of methylphenidate on fatigue in advanced cancer: an aggregated N-of-1 trial. J Pain Symptom Manage. 2015;50(3): 289–296.
Morris RL, Sanders C, Kennedy AP, Rogers A. Shifting priorities in multimorbidity: a longitudinal qualitative study of patient’s prioritization of multiple conditions. Chronic Illn. 2011;7(2):147–161.
Nava S, Ferrer M, Esquinas A, et al. Palliative use of non-invasive ventilation in end-of-life patients with solid tumours: a randomised feasibility trial. Lancet Oncol. 2013;14(3):219–227.
Patel KV, Guralnik JM, Phelan EA, et al. Symptom burden among community-dwelling older adults in the United States. J Am Geriatr Soc. 2019;67(2):223–231.
Portenoy RK, Thaler HT, Kornblith AB, et al. The Memorial Symptom Assessment Scale: an instrument for the evaluation of symptom prevalence, characteristics and distress. Eur J Cancer. 1994;30A:1326– 1336.
Ritchie CS, Hearld KR, Gross A, et al. Measuring symptoms in community- dwelling older adults: the psychometric properties of a brief symptom screen. Med Care. 2013;51(10):949–955.
Smith MEB, Nelson HD, Haney E, et al. Diagnosis and Treatment of Myalgic Encephalomyelitis/Chronic Fatigue Syndrome. Evidence
Report/Technology Assessment No. 219. AHRQ Publication No. 15-
E001-EF. Rockville, MD: Agency for Healthcare Research and Quality; December 2014. Addendum July 2016. www.effectivehealthcare.ahrq.gov/reports/final.cfm. Accessed January 3, 2022.
Chapter 70
Palliative Care Across Care Settings
Lisa Cooper, Laura Frain, Nelia Jain
INTRODUCTION
Consider Mrs. M, an 85-year-old woman with congestive heart failure, mild cognitive impairment, and multijoint osteoarthritis, who lives alone in a second-floor walk-up apartment. Mrs. M retired from her part-time secretarial work at the age of 70 to care for her husband with Lewy body dementia and has been widowed for the past 5 years. Mrs. M recently agreed to try ambulating with a walker after her third trip to the emergency room for falls but frequently forgets to use it. Her daughter, an only child, lives in the same city and helps her mother shop and clean. In the past, she accompanied her mother to medical appointments but has been unable to do so regularly in the past 2 years due to her work schedule and helping care for her grandchildren. Although Mrs. M never missed medical appointments in the past, she has now "no-showed" for most visits, including follow-up after being evaluated in the emergency room. Her daughter receives a call from a concerned neighbor who reports that her mother only rarely comes out of her apartment and, when she does, seems confused and unsteady. Her daughter has had similar concerns and additionally worries that her mother appears to be in pain most days, depressed, and losing weight. She takes a day off from work to bring her mother to see her primary care doctor; but, prior to the appointment, she gets a call from the emergency room that her mother has fallen again, resulting in a broken hip requiring surgery. Mrs. M's postoperative course is complicated by delirium and pain. She is transferred to a skilled nursing facility for rehabilitation, where the team raises the concern that Mrs. M now has moderate dementia, frailty, and significant gait impairment. They recommend a more supportive living environment. Mrs. M moves in with her
daughter and enrolls in a home-based primary care program. Initially, Mrs. M does well, with notable improvements in her mood, weight, walking, pain control, and cognitive health. However, over the next 2 years, her dementia progresses with increasingly severe behavioral and psychological symptoms and rising care needs. She becomes nearly wheelchair bound due to pain when walking. With limited support for her mom at home and now facing her own medical issues, her daughter admits Mrs. M to a nursing home. Mrs. M is hospitalized three times over the next year. She enrolls in hospice after the third hospitalization and survives another year.
Learning Objectives
Conceptualize the ideal model of palliative care provision for the aging population facing multimorbidity and progressive functional impairment across care settings.
Identify existing deficiencies and barriers to the adequate delivery of palliative care for older adults across the care continuum.
Illustrate how effective collaboration between geriatrics and palliative care services with integration of palliative care interventions into existing models of care can help bridge these gaps.
Key Clinical Points
The palliative care needs of older adults differ from those of younger populations due to differences in illness trajectories, treatment preferences, and patterns of health care utilization.
Although the highest proportion of palliative care needs for older adults exists in the community setting, access to palliative care is concentrated in acute care settings and through hospice utilization.
Older adults often experience advancing frailty and multiple complex conditions, necessitating an integrated approach between the geriatrics and palliative care specialties to meet the needs of this population across their trajectory of functional decline and increasing needs for care and support.
The case of Mrs. M illustrates the need for a truly integrated system of palliative and geriatric care that meets the medical and social needs of older adults across the care continuum. Palliative care aims to improve the quality of life of persons with serious and advanced illness by decreasing symptom burden, addressing psychological and spiritual distress, and promoting the well-being of patients and their families. The types of care that would help support Mrs. M's quality of life vary as her health circumstances and care settings change. These include, but are by no means limited to, (1) advance care planning conversations to delineate the patient's goals, values, and preferences and ensure goal-concordant care; (2) assessment and treatment of postoperative pain and delirium, with heightened monitoring for postoperative cognitive and functional decline; (3) an interprofessional team that can provide longitudinal care for the patient across various settings; (4) caregiver training and support for the patient's daughter; and (5) functional assessment and home adjustments to address needs for adaptive devices and therapy services.
Unfortunately, in reality, our current health care system is fragmented, with a disproportionate focus on acute and immediate post-acute care, often leading to a lack of emphasis on matching treatments to patient goals. In this chapter, we describe the ideal of a palliative care system for older adults that delivers integrated care across the care continuum, including hospice. We then contrast this ideal with the barriers that currently prevent integrated delivery of palliative care across the continuum and identify opportunities for improvement. We begin by discussing the tremendous overlap between the fields of geriatrics and palliative care.
GERIATRICS AND PALLIATIVE CARE ARE BETTER TOGETHER
The case of Mrs. M illustrates the blurred boundaries between geriatrics and palliative care (Figure 70-1). Geriatrics and palliative care both emphasize improving quality of life in late life. Both fields require excellence in the management of persons with multimorbidity, dementia, and disability. Both fields emphasize attention not just to the patient but also to the patient's caregiver, and to the situation of the patient within the context of family and community. Both fields are directed primarily toward the sickest and most frail older adults, those who account for about half of health care spending in the United States.
FIGURE 70-1. Intersection of palliative care and geriatrics.
By 2030, the number of Americans aged 65 or older is predicted to approach 73 million, representing 21% of the total US population. Eighty percent of all older adults have at least one chronic condition, and 50% have at least two. Notably among these, the number of Americans living with Alzheimer disease is anticipated to more than double from 5.8 million currently to 13.8 million by 2050. It is also increasingly recognized that a substantial proportion of community-dwelling older adults are frail (~15%) or prefrail (~45%). Like dementia, frailty is even more common at older ages, among women and racial and ethnic minorities, and with additional variability in prevalence based on geographic region, location of residence, and socioeconomic status.
In the setting of frailty and multiple chronic illnesses, older adults commonly experience long trajectories of functional decline and episodic fluctuations in health status. They frequently face repeated, nonlinear, and often difficult transitions across care settings, including the community, hospital, rehabilitation, and nursing facilities, and ultimately experience disjointed care. Older adults’ family and physicians often have a poor understanding of their goals of care and have rarely engaged in discussions regarding treatment preferences.
Spurred by the needs of the aging population, there is increasing recognition that palliative care does not equate to “comfort care only” for the terminally ill. The recognition that geriatric palliative care approaches can have a major impact on symptom management, communication around goals of care, management of multiple chronic conditions, and caregiver support for all patients with advanced illness and frailty from the time of diagnosis is an invaluable step toward improving the quality of life for older adults.
The fields of geriatrics and palliative care share much common ground, including providing goal-oriented care, working within an interprofessional team, using multidimensional assessments to identify unmet needs, attending to psychosocial factors, addressing caregivers' needs and including them in care planning and delivery, and bringing valuable services to the most vulnerable and frail older adults. On the other hand, these intersections between geriatric medicine and palliative care give rise to unclear boundaries and limited mutual understanding between the two professions. In addition, there are several gaps between the two fields. Palliative care contributes to patient care by offering intensive symptom management and psychosocial support; prognostication; alignment of goals and treatments among patients, families, and clinicians; and support in ethical decision making. Geriatric medicine offers comprehensive geriatric assessment, including a better understanding of frailty and resilience, functional status, and expected trajectory. As such, the two specialties best serve patients when applied in a blended, interconnected way to deliver integrated care rather than in series (geriatrics, then palliative care). These care models can be implemented in different settings and best serve mutual patients because each specialty brings a unique and important aspect to the patient's care, with the ability to intensify or de-intensify a particular service's involvement depending on the patient's health state.
By simultaneously advocating for the holistic care of older adults, recognizing the specialized knowledge base and skills possessed by each discipline, and ensuring competency of future trainees in both geriatrics and palliative care, clinicians from both fields will be able to work together to target the sickest 5% of our population at highest risk, and match these patients with delivery models best suited to address their needs.
THE IDEAL OF A CARE CONTINUUM
The ideal of a continuum of care has been defined as, “a client-oriented system composed of both services and integrating mechanisms that guides and tracks clients over time through a comprehensive array of health, mental health, and social services spanning all levels of intensity of care.” The introductory chapter in this section on palliative care introduced the concept that palliative care is appropriately initiated from the time of diagnosis with serious or advanced illness. For many, though not all, serious and advanced illnesses are diagnosed in the outpatient setting. The ideal care continuum would integrate palliative care into the settings where patients spend the most time after diagnosis, that is, in their homes. This would involve access to robust palliative care services along with integration of basic palliative care principles in ambulatory and home-based settings.
As patients develop more serious conditions, such as the advancing frailty and dementia in the case of Mrs. M, they may require more care in institutional settings such as hospitals, skilled nursing facilities, and nursing homes. The ideal continuum of care would track patients as they move between settings, coordinate care across providers, and offer comprehensive services as people move between the community and the inpatient setting. As a patient's goals increasingly shift toward a focus on quality of life, high-quality palliative care services should take on a greater role in care and be available in all settings.
THE REALITY OF PALLIATIVE CARE FOR OLDER ADULTS
Palliative care has been noted to improve quality of life and symptom management for patients, increase satisfaction of patients and their family members, and reduce health care costs. However, in the current health care system, palliative care is most readily made available to patients during times of crisis (acute inpatient hospitalization) or at the end of life (hospice). The majority of older adults’ last months and years are still spent in the community, with institutionalization occurring only in the very late stages of life. There has been less progress noted in the area of nonhospice palliative care in nursing homes or the community setting, including assisted living and home-based programs, suggesting that the current level of palliative care services does not adequately address the needs of older adults.
Palliative care needs for older adults with advancing frailty differ from the needs of younger adults. Older adults tend to have prolonged illness duration with numerous chronic and complex medical conditions. Older patients experience greater functional and cognitive decline, and they also have increased caregiving needs for longer periods of time. Less is known about how older adults and their caregivers cope with the stressors of chronic illness compared to younger adults with sudden changes in health such as a new cancer diagnosis. Additionally, due to exclusion of older adults with multimorbidity from many research studies, there is a paucity of evidence to guide symptom management in this population. While older adults typically die of chronic, slowly progressive illnesses with multiple acute exacerbations, current Medicare coverage is targeted to the acute episodes of illness. This has resulted in palliative care that has been implemented in a reactionary fashion to these acute episodes. This approach insufficiently addresses patients’ longer-term needs and creates missed opportunities for palliation along the disease trajectory (Figure 70-2).
FIGURE 70-2. Gaps in palliative care services for older adults.
In addition, frailty increases with age. While there are many different ways to define and capture frailty, all are based on multidimensional vulnerabilities that cause susceptibility to stressors and reduced adaptive capacity and resilience in the face of acute events. Since increasing levels of frailty can present as gradual functional and cognitive decline, the trajectory may be less clear than in a single-disease clinical evaluation.
This also means that palliative care and supportive care might be needed for a longer period of time. Currently, there is no indicator of frailty in hospice eligibility criteria since the removal of "adult failure to thrive" and "debility" as principal diagnoses for hospice eligibility in 2014, despite a 2009 Centers for Medicare and Medicaid Services report indicating a marked increase in the use of these diagnoses.
Better defining frailty and its expected trajectory, together with periodic measurement and assessment of increasing health care needs, might assist in engaging palliative care approaches earlier and help in understanding poor outcomes from medical interventions. One area that can be addressed with increasing levels of frailty is polypharmacy: deprescribing has been shown to be beneficial in frail older adults, especially when the focus shifts toward quality of life and symptom control and toward avoiding adverse effects and medications unlikely to contribute any meaningful benefit. Palliative care approaches are needed when patients and families are facing advancing frailty; however, due to the complexity of its definition and wide range of presentation, these needs are often unmet. True collaboration between geriatrics and palliative care experts can address this widening gap.
In order to adequately address the needs of older adults, palliative care must be instituted in a timely manner, earlier in the course of a patient's illness. Upstream involvement of palliative care in the care of older adults, particularly while they are still residing in the community, allows for increased opportunities for shared decision making, ongoing evaluation of a patient's prognosis and symptom burden, and longitudinal care across different care settings. In addition to routine palliative care, clinicians have the opportunity to work with patients' primary care physicians, geriatricians, and specialists to emphasize function, financial planning, and maintenance of social roles within a community. This collaboration is even more integral when caring for older adults, who often experience multiple concurrent progressive conditions that may challenge the formulation of prognostic estimates in terms of time and function. Effective palliative care for older adults will require moving away from a focus on a single disease trajectory toward understanding and assessment of a patient's reserves and vulnerabilities, in addition to their values and treatment preferences, to guide medical decision making. This foundation will aid in easing transitions to
higher levels of care and help patients avoid inappropriate or burdensome treatments.
PALLIATIVE CARE IN THE ACUTE CARE SETTING
Of all settings, palliative care services in the acute inpatient setting have enjoyed the most successful implementation and advancement over the last 20 years. In the inpatient setting, palliative care teams are interprofessional, comprising a mix of physicians, social workers, nurses, and chaplains.
These teams offer expertise in symptom management, support to medical teams as well as patients and families, knowledge about community-based palliative care services and hospice organizations, and assistance in prognostication over a variety of illnesses.
Given that the majority of patients with advanced or serious illness will spend some time in the hospital throughout the course of their illness, and that up to 50% of adult deaths occur in the hospital setting, ensuring adequate palliative care services in the inpatient setting is of particular importance.
There is increasing evidence that inpatient palliative care interventions improve the quality of clinical care, increase patient and family satisfaction, improve transitions of patients out of the hospital, and reduce health care costs. Patients who receive inpatient palliative care consultations incur lower laboratory and radiology costs, are less likely to be admitted to the intensive care unit (ICU), and spend fewer days in the ICU once admitted compared to patients receiving usual care, particularly among patients who die during hospitalization.
Older adults receiving palliative care consultation differ in characteristics and interventions compared to younger patients. Referral for palliative care consultation for older adults is more likely to be requested for disease processes other than cancer and for goals of care discussions rather than symptom management. Older adults tend to be referred to palliative care earlier in their hospital courses and to have documented "do-not-resuscitate" code statuses at the time of consult. In the oldest subset of the population (80 and older), patients are more likely to be referred to palliative care for a diagnosis of dementia, less likely to be included in discussions around goals of care due to lack of decisional capacity, and more likely to be discharged to nursing homes. In reviewing palliative care interventions offered by age, there are fewer interventions for pain or symptom management offered in older adults and an increased likelihood of limits placed on life-sustaining
treatments after consultation. Given these differences, inpatient palliative care teams should be prepared to meet the unique palliative care needs of older adults.
Despite the advancements that have been made in inpatient palliative care, as well as the positive patient and family outcomes that have been observed, challenges remain in the delivery of palliative care in the inpatient setting. Inpatient palliative care teams continue to struggle with late requests for consultation. In addition, constant turnover of inpatient treatment teams, hesitation to move away from a curative focus, and varying levels of knowledge about and alignment with palliative care from referring clinicians pose ongoing barriers to optimal palliative care provision. Due to the acuity and complexity of medical issues present in the hospitalized patient population, it is often difficult for palliative care teams to address all of the needs of these patients during the acute hospitalization, particularly when caring for older adults with complex comorbidities and multifaceted care needs.
In order to address deficiencies that remain in the delivery of palliative care in the inpatient setting, there are opportunities to consider upstream involvement of palliative care as well as integration of palliative care services within existing models of care in the inpatient setting. Rates of presentation to the emergency department (ED) increase with age, and the percentages of patients requiring hospital admission are highest in older adults. As the population continues to age, there will be a rising number of seriously ill older adults who will present to the ED. These needs may be met by identification of ED palliative care “champions” such as nurse managers and care facilitators, routine use of screening tools to evaluate the need for palliative care assessment in this population, development of care pathways as a product of collaboration between ED and palliative care teams, availability of palliative care consultative services to the ED, and familiarity with community palliative care and hospice resources among ED teams.
An organized approach to the identification, assessment, and management of older adults’ complex medical, social, psychological, and functional concerns and vulnerabilities is key. For patients admitted to inpatient hospital floors, utilization of such a systematic approach by an interprofessional team will ensure the needs of older adults are met. As previously discussed, older adults receiving palliative care consultation are
more likely to have nononcologic diagnoses, which are associated with less predictable hospital courses and illness trajectories. As the hospital course unfolds, communication interventions should target the status of the patient’s overall condition, estimated prognosis in terms of functional recovery and life expectancy, and available treatment options in the context of associated risks and likelihood of benefit. Clinicians with an understanding of basic palliative care assessment may be successful in appropriately identifying palliative care needs on day of admission, engaging palliative care teams upstream in the hospital stay of seriously ill older adults, monitoring progress on clinical improvement and achievement of patient’s goals, and frequently reevaluating to identify new palliative care needs in the older adult population as they arise throughout the hospitalization (Figure 70-3A and 3B).
FIGURE 70-3 A. Inpatient criteria: guide for primary teams to prompt palliative care consultation on day of admission. B. Inpatient criteria: guide for primary teams to prompt palliative care consultation on subsequent days of hospital admission.
Emerging opportunities for early introduction of palliative care models include programs such as geriatric comanagement with orthopedic surgery and general surgery, with the geriatric surgery verification program from the American College of Surgeons being adopted by an increasing number of medical centers, and with emerging geriatric-ED initiatives. Such programs advocate for early collaboration with geriatric medicine clinicians, who perform an early frailty, cognitive, and functional assessment and assist the team in tailoring treatment plans to the specific characteristics and needs of the older patient. Within these models, comanagement with palliative care teams may be considered in instances such as facilitation of complex communication and medical decision making, reevaluation of goals of care based on changes in the patient's clinical trajectory, or easing of end-of-life transitions.
The ICU is another inpatient location that offers substantial opportunities for the successful delivery of palliative care to older adults. Older adults admitted to the ICU have high mortality rates during the course of their hospitalization, and older ICU survivors have elevated palliative care needs related to ongoing physical and/or psychological distress. They also remain at increased risk for rehospitalization and 6-month mortality. Studies have demonstrated that patients, families, and clinicians may possess unrealistic expectations about prognosis and the effectiveness of ICU interventions.
Critically ill patients and their caregivers often seek more honest and open discussions regarding prognosis and appropriate therapies based on patients’ values and goals. ICU clinicians should all receive training in core skills of primary palliative care, including serious-illness communication skills and symptom management. Palliative care “bundles” have been suggested to improve communication and promote the comfort of critically ill patients (IPAL-ICU). Key quality measures in these bundles include early identification and documentation of surrogate decision makers and advance directives, regular symptom assessment, involvement of social work and spiritual care, and regular family meetings to update the patient and family on the status of the patient’s illness. By emphasizing compassionate communication, symptom management, and shared decision making, palliative care can be successfully integrated for critically ill patients at all stages of illness.
Across the inpatient setting, collaboration between palliative care teams and patients' longitudinal outpatient clinicians, in particular geriatricians, primary care physicians, care managers, and care transition specialists, can bridge gaps in knowledge of patients' values and treatment preferences, communication, and appropriate identification of community resources to best meet patients' needs in their current health state. Additionally, novel
pathways to meet the unique hospital-based considerations of older adults offer further opportunity for palliative care integration and comanagement, particularly for patients experiencing high symptom burden or for those patients and families benefiting from ongoing goal clarification and assistance in complex medical decision making.
PALLIATIVE CARE IN POST-ACUTE CARE SETTINGS
Many older adults with complex care needs are living, and now also dying, in the community. Older individuals of advancing age often live alone, particularly women, with estimates of over 50% for women ages 85 and older in 2018. Approximately 21% of these community-dwelling older adults, totaling approximately 7.5 million individuals, are partly or completely homebound; they tend to be older, female, and members of ethnic and cultural minority groups, and to have lower incomes. Despite a high burden of chronic conditions, especially dementia, and reports of lower health status, these older adults struggle with or are unable to access the ambulatory care needed for their complex medical, psychosocial, and neurologic needs, and they have high rates of hospitalization, leading to increased fragmentation of care. Despite the cycle of recurrent fluctuations in health and transitions in care, older adults' families (if present) and physicians often have limited understanding of their goals of care, care needs, and prognosis, and have rarely engaged in discussions regarding treatment and care preferences. This is particularly problematic as family caregivers still provide the majority of direct care for US older adults with complex health conditions, and much of this care occurs in the community as opposed to nursing facilities. Additionally, growing evidence suggests that older adults living at home with dementia are more likely to have bothersome pain, dyspnea, and mood symptoms compared with people living in nursing or other residential facilities, with similar patterns seen among other vulnerable older adults in the community. Community-based palliative care services are currently available through both home-based and institution-based models (Figure 70-4).
FIGURE 70-4. Palliative care delivery models in the community.
PALLIATIVE CARE IN THE CLINIC
For older adults whose functional status remains preserved or who have sufficient assistance to access care in the ambulatory setting, palliative care clinics facilitate the development of an early relationship between clinicians and patients, improve communication between primary care clinicians and other subspecialty clinicians, and increase satisfaction with care from the perspective of patients and their families. These clinics also serve as an appropriate setting for posthospital discharge follow-up for patients seen by inpatient palliative care teams.
There are a variety of existing models of palliative care clinics in our current health care system (Improving Outpatient Palliative Care [IPAL]). Some clinics are affiliated with particular specialties, from which they generate all of their referrals. Other clinics are stand-alone clinics that function independently or share space and staffing with a primary care clinic or specialty clinic in a partially or completely embedded framework.
Additionally, clinics vary in the focus of their care, ranging from providing task-specific to symptom-specific palliative care. Palliative care clinicians will often choose between providing care in a consultative or comanagement fashion for the majority of patients, while some patients will be taken care of entirely by the palliative care specialist. Finally, some clinics have the capability to extend their services beyond the clinic and into the community, providing visits to clinic patients who become homebound or transfer to a facility.
Outpatient palliative care services have been associated with improved patient outcomes, such as significant symptom improvement and high satisfaction with services. Care provided in palliative care clinics has also been associated with better maintenance of performance status, increased
discussions about advance care planning and end-of-life care, and better long-term outcomes for caregivers. Patients receiving care in palliative care clinics in addition to usual care experience fewer hospital days and skilled nursing facility days, as well as increased hospice use. Clinicians also benefit, with increased rates of satisfaction observed in referring clinicians as well as palliative care clinicians.
Presently, the majority of outpatient palliative care clinics focus on care for patients with cancer diagnoses. As outpatient clinic services expand, particularly to nononcology populations, clinicians may face challenges, including a lack of standardization of the development process, the need for adequate marketing, and insufficient staffing to accommodate anticipated growth of the clinics. Promising examples of outpatient palliative care for individuals with serious, life-limiting non–cancer-related diagnoses are emerging. A 2020 pragmatic trial of integrated palliative care at three academic tertiary centers for individuals with Parkinson disease and related disorders demonstrated better quality of life, improved symptom burden and motor symptoms, and higher rates and quality of advance directives in the intervention group compared to those receiving standard of care. Additionally, the study suggested a benefit for caregiver strain at 1 year. Although care at each site was provided in an ambulatory setting and by an interprofessional team using standardized checklists, the specific model of care varied by institution. Encouragingly, the impact on outcomes was equally effective across sites, suggesting that flexibility may exist in the design of integrated palliative care in the outpatient clinical setting if fidelity to key palliative care quality domains is maintained.
There are several limitations that impede the delivery of ideal palliative care in the clinic setting. Patients with severe and advanced illness, particularly those with chronic illnesses, are at ongoing risk of declining to the point that they are unable to attend their outpatient clinic appointments.
Unless clinics are associated with bridge programs designed to extend palliative care services to homes and facilities until patients qualify for hospice (provided the latter is an acceptable option to them), chronically ill patients with progressive functional and cognitive decline often experience a gap in the continuation of their palliative care services. In addition, clinics are often limited by their hours of operation and staffing, particularly lacking coverage by a health care provider 24 hours a day to manage any acute crises that may arise. Finally, given the heavy reliance on referrals from the
inpatient setting, primary care clinics, and subspecialty clinics, palliative care clinics require a certain amount of “buy-in” from other disciplines in order to continue to operate. As the population with chronic conditions continues to grow and age, increased attention to availability of palliative care clinics, reimbursement for palliative care services, and relationship building between hospitals, long-term care institutions, and community-based home care organizations is needed to ensure that this population of patients will be able to receive palliative care outside of the hospital setting.
PALLIATIVE CARE IN THE HOME
An increasing proportion of older adults are living at home with chronic illness and have intensive symptom management and advance care planning needs, yet do not qualify for hospice. As older adults become increasingly homebound, they are at increased risk for symptom and illness burden. Completely homebound patients have a 2-year mortality of approximately 40%, demonstrating a natural intersection of palliative care provision within home-based care. The ideal home-based palliative care model comprises an interdisciplinary team of physicians, nurses, social workers, chaplains, home health aides, and pharmacists (similar to the home hospice model) who are able to provide services in coordination with the primary care clinician and specialists. Services would include pain and nonpain management, medication management, discussions about health care decision making and goals of care, caregiver support, attention to spiritual concerns and quality of life, arrangement of respite care, and assistance with the transition to hospice if needed. Specifically tailored to the chronically ill, frail older adult, home-based palliative care would ideally integrate with home-based primary care or other community geriatric care models, with an expanded focus that includes education and support for caregivers, timely access to medications, optimal use of applicable community resources, home safety and comfort, and 24/7 access to a health care clinician in order to ensure continuity of care and manage acute crises. A closely integrated model of home-based palliative care with home-based primary/geriatric care is especially beneficial for patients and families who are facing the strain of longer, often unpredictable, but ultimately terminal illness trajectories with challenging day-to-day prognostication and frequent health exacerbations due to even minor physiologic stressors.
This situation is commonly faced when caregivers and clinicians provide longitudinal care for patients with
frailty, dementia, and complex multimorbidity. While highly integrated palliative and primary care models have existed in limited, most often capitated, health care settings (the most notable and long-standing being the VA Home-Based Primary Care program), new partnerships are now arising between other home-based primary/geriatric and palliative care programs in the context of the United States' evolving alternative payment models and the growth of for-profit practices in home-based medical care. Home-based palliative care organizations would benefit from relationships with home-based primary care as well as institutions such as hospitals, primary care clinics, specialty clinics, nursing homes, and hospice agencies in order to increase the referral base and promote longitudinal care across the care continuum.
Studies on existing models of home-based palliative care have demonstrated beneficial outcomes. The patient populations receiving home-based palliative care include patients with both cancer and noncancer diagnoses (eg, dementia, cardiac disease, pulmonary disease). Benefits noted include improved symptom scores in areas such as depression, dyspnea, and anxiety; decreased overall cost of care; increased completion of advance directives; and increased rates of dying at home. Fewer visits to the ED and lower rates of hospitalization have been observed, without a difference in overall survival. Rates of referral to hospice are higher than among control patients without home-based palliative care. Of the patients who do enroll in hospice, those who have received home-based palliative care prior to this transition have longer hospice enrollments than those receiving usual care.
Despite the positive impact home-based palliative care programs have on community-dwelling patients with serious and advanced illnesses, the advancement of palliative care in the arena of home-based care has limitations. Home-based palliative care delivery is highly variable and delivered by diverse sources, including hospitals and large health systems, community hospice and home health agencies, and individual practices or organizations. Programs face economic barriers such as insufficient reimbursement for services in proportion to time spent, particularly for professions other than physicians and nurses, and a lack of incentive to reduce costs. Organizational barriers exist as well, primarily driven by the lack of adequate staffing to meet the needs of homebound older adults, as well as the lack of recognition and training around quality standards and
measurement. At present, there is no standardization of home-based palliative care practices or institutional support, leading to difficulty in translating the beneficial outcomes noted in existing home-based palliative care models to the broader population. While progress has been made in developing a quality framework that adequately represents the needs of the homebound population, there are ongoing efforts to address implementation, measurement, and reimbursement challenges. These standardized quality metrics have the potential to ensure that frail, cognitively impaired, and functionally limited people experience improved quality of life while potentially reducing burdensome hospital visits and health care costs.
PALLIATIVE CARE IN LONG-TERM CARE FACILITIES
Among adults older than 65, approximately 25% of all deaths occur in a nursing home, with projections that up to 40% of deaths of older Americans will occur in nursing homes by 2030. The majority of residents in long-term care facilities are older adults who experience progressive frailty and functional impairment and require increased assistance with activities of daily living. For older adults in long-term care facilities, remaining life expectancy is typically short, on the order of years. With projections suggesting that the number of frail older adults may triple or quadruple in the next 30 years, closer examination is needed of the state of palliative care services in long-term care facilities. For many long-term care residents, hospice is the only form of specialty palliative care available.
What is known about palliative care services in long-term care facilities at present is limited. There are several studies that demonstrate long-term care residents suffer from uncontrolled symptoms, depression, existential suffering, and difficulty adjusting to the long-term care facility as their permanent place of residence. Older adults who do receive palliative care consultation have lower rates of end-of-life hospitalizations, with those who receive consultation further upstream in their illness experiencing lower overall rates of hospitalization and an almost 50% reduction in potentially burdensome transitions such as an emergency room visit within 30 days of death or admission to hospice within 3 days of death.
A number of significant barriers to providing adequate palliative care in long-term care facilities exist. Demographics of patients in long-term care facilities, particularly nursing homes, are changing, with increased numbers
of patients with chronic illness and multiple medical problems. A large percentage of these patients have noncancer diagnoses, leading to increased difficulty with prognostication and late referrals to hospice. Nursing home residents are often high utilizers of acute care settings, increasing the risk that patients' values and treatment preferences are unknown or have not been sufficiently elucidated. These issues are further complicated by the fact that there is often high staff turnover at long-term care facilities, making it difficult to sustain palliative care educational efforts and ensure palliative care competency among staff.
With current reimbursement policies focusing on rehabilitation needs and sustenance, a seemingly high proportion of patients undergo aggressive restorative treatment, such as intravenous hydration or feeding tube placement, instead of shifting to palliative or hospice care. In addition, short-stay residents in nursing homes whose care is paid for under the Medicare skilled nursing facility benefit following a hospitalization are only able to access the Medicare hospice benefit if they revoke the skilled nursing facility benefit and pay for room and board out of pocket. These out-of-pocket costs are a major financial disincentive for short-stay residents to exercise the Medicare hospice benefit.
Guidelines for quality palliative care in nursing homes emphasize that palliative care does not require patients to forgo curative treatments nor relinquish subacute rehabilitation or hospitalization (National Consensus Project Guidelines for Quality Palliative Care). Three different models of delivering palliative care services in nursing homes have been proposed: (1) hospice partnerships, (2) external palliative care consultation services, and
(3) facility-based palliative care teams. Regardless of the approach taken, sustainability is key. Measures that are critical to sustainability of palliative care interventions in the long-term care facility setting include ongoing education efforts, “buy-in” of palliative care by nursing home staff and clinicians, increased availability of nurses and patient aides, and working relationships with hospitals and community organizations including hospice to ensure continuity of palliative care across all settings.
HOSPICE
Hospice care is the most well-known provider of palliative care services in the community. Hospice care is a philosophy of care in which patients facing a life-limiting illness are approached in a holistic manner, with a focus on
medical care and symptom management, as well as an interest in the emotional, spiritual, and future planning needs of the patient and family. There are now more than 4600 hospice organizations nationally, and these organizations serve over 1.5 million people. Of Medicare beneficiaries receiving hospice care in 2018, greater than 60% were older adults aged 65 and older. The greatest increase in hospice utilization was observed in patients aged 85 and older. Research on hospice care has demonstrated increased survival compared to usual care, increased patient and family satisfaction with end-of-life care, increased rates of patients dying in their location of choice, and decreased inappropriate health care resource utilization. Despite tremendous growth in the use of hospice services over the last 30 years, hospice care is still underutilized. A substantial number of patients who would prefer to die at home still die in the hospital. At present, the majority of patients receive hospice care at home. Nationally, hospice is not available in all long-term care facilities, and only 6% of residents in nursing homes elect the hospice benefit annually, far lower than one would expect given the short lengths of stay prior to death for many long-term care facility residents.
The Medicare hospice benefit requires that patients have a life expectancy of 6 months or less. An unfortunate but common misconception is that palliative care and hospice care are equivalent. This leads to missed opportunities for palliative care involvement independent of prognosis, as well as delayed referrals to hospice until clinicians are confident the patient is dying. This has translated to a median length of service in a hospice organization of 18 days. Over half of patients are enrolled in hospice for 30 days or less, and 28% die within the first 7 days of enrollment. The extent of comprehensive palliative care delivered by hospice organizations also varies, with larger hospice organizations better able to provide more intensive services such as 24-hour physician staffing, ethics committees, and regular use of standardized assessment tools for pain and symptom management.
Cancer remains the most frequent principal hospice diagnosis of beneficiaries of hospice care. As discussed previously, the diseases affecting older adults are largely chronic in nature, with fluctuating trajectories marked by acute exacerbations and difficulties in prognostication, which in turn may reflect the timing with which clinicians refer older adults for hospice. To avoid the continued pattern of late referrals of older adults to
hospice until they are in an advanced terminal state, efforts will need to be shifted upstream, with increased emphasis on improved palliative care delivery across care settings, and increased communication and collaboration between primary care and/or specialty clinicians caring for older adults with specialists in palliative care and hospice. Access and utilization of hospice care by older adults will improve by expanding coverage of hospice services to patients with longer life expectancies and eliminating the need for patients to choose between hospice services and other home-care or institutional-level services. Newer programs such as the Medicare Care Choices Model are exploring opportunities to provide eligible beneficiaries access to hospice services while they continue to receive disease-modifying treatment. Thus far, participants in this program have demonstrated increased likelihood of hospice enrollment, decreased reliance on inpatient care, and positive caregiver experiences. By reconceptualizing the population for whom hospice care is offered away from eligibility based on prognosis, and instead, on patients and their caregivers with increasing vulnerabilities and need for care and in-home supports, older adults may be able to benefit from the supportive and holistic care provided by hospice agencies for longer periods of time.
CONCLUSION
The field of palliative care has enjoyed significant advancements over the years. Its rapid growth is evidenced by an increased number of clinicians choosing to train in the field and the increased availability of palliative care teams in a variety of care settings. The more widespread implementation of palliative care has led to improved quality of life and symptom management for patients; improved satisfaction among patients, families, and health care providers; and decreased health care costs. However, the presence of adequate palliative care for older adults has lagged behind. Older adults have unique palliative care needs that are best met by providing longitudinal palliative care across the care continuum. Implementation of palliative care for older adults at the time of diagnosis of advanced or serious illness, or at a time when they are noted to develop progressive frailty or cumulative burden from multiple comorbidities, would be ideal. Increased involvement of palliative care with a focus on collaboration and comanagement with geriatricians and other longitudinal clinicians will help older adults reside longer in their communities, maintain their level of function, cope with care
transitions, and participate in shared decision making regarding their treatment preferences. The health care system will also need to change in order to train clinicians in primary palliative care skills, expand access to specialty palliative care particularly in the community, and increase coordination of care among institutions and community organizations.
Addressing these gaps in the current palliative care models will improve the quality of palliative care delivered to older adults across the care continuum.
FURTHER READING
Bandeen-Roche K, Seplaki CL, Huang J, et al. Frailty in older adults: a nationally representative profile in the United States. J Gerontol A Biol Sci Med Sci. 2015;70(11):1427–1434.
Bowman BA, Twohig JS, Meier DE. Overcoming barriers to growth in home-based palliative care. J Palliat Med. 2019;22(4):408–412.
Bryant EA, Tulebaev S, Castillo-Angeles M, et al. Frailty identification and care pathway: an interdisciplinary approach to care for older trauma patients. J Am Coll Surg. 2019;228(6):852–859.e1.
Chen CY, Thorsteinsdottir B, Cha SS, et al. Health care outcomes and advance care planning in older adults who receive home-based palliative care: a pilot cohort study. J Palliat Med. 2015;18(1):38–44.
Cooper L, Abbett SK, Feng A, et al. Launching a Geriatric Surgery Center: recommendations from the Society for Perioperative Assessment and Quality Improvement. J Am Geriatr Soc. 2020;68(9):1941–1946.
Hamaker ME, van den Bos F, Rostoft S. Frailty and palliative care. BMJ Support Palliat Care. 2020;10(3):262–264.
Harrison KL, Ritchie CS, Patel K, et al. Care settings and clinical characteristics of older adults with moderately severe dementia. J Am Geriatr Soc. 2019;67:1907–1912.
Kamal AH, Currow DC, Ritchie CS, Bull J, Abernethy AP. Community-based palliative care: the natural evolution for palliative care delivery in the U.S. J Pain Symptom Manage. 2013;46(2):254–264.
Kluger BM, Miyasaki J, Katz M, et al. Comparison of integrated outpatient palliative care with standard care in patients with Parkinson disease and related disorders: a randomized clinical trial. JAMA Neurol. 2020;77(5):551–560.
Leff B, Carlson CM, Saliba D, Ritchie C. The invisible homebound: setting quality-of-care standards for home-based primary and palliative care. Health Affairs. 2015;34(1):21–29.
Miller SC, Lima JC, Intrator O, Martin E, Bull J, Hanson LC. Palliative care consultations in nursing homes and reductions in acute care use and potentially burdensome end-of-life transitions. J Am Geriatr Soc. 2016;64(11):2280–2287.
Nelson JE, Curtis JR, Mulkerin C, et al. Choosing and using screening criteria for palliative care consultation in the ICU: a report from the Improving Palliative Care in the ICU (IPAL-ICU) Advisory Board. Crit Care Med. 2013;41(10):2318–2327.
Olden AM, Holloway R, Ladwig S, Quill TE, Van Wijngaarden E. Palliative care needs and symptom patterns of hospitalized elders referred for consultation. J Pain Symptom Manage. 2011;42(3):410–418.
Ornstein KA, Leff B, Covinsky KE, et al. Epidemiology of the homebound population in the United States. JAMA Intern Med. 2015;175(7):1180–1186.
Pacala JT. Is palliative care the “new” geriatrics? Wrong question—we’re better together. J Am Geriatr Soc. 2014;62(10):1968–1970.
Ritchie CS, Leff B. Population health and tailored medical care in the home: the roles of home-based primary care and home-based palliative care. J Pain Symptom Manage. 2018;55(3):1041–1046.
Smith AK, Thai JN, Bakitas MA, et al. The diverse landscape of palliative care clinics. J Palliat Med. 2013;16(6):661–668.
Visser R, Borgstrom E, Holti R. The overlap between geriatric medicine and palliative care: a scoping literature review. J Appl Gerontol. 2021;40(4):355–364.
Voumard R, Rubli Truchard E, Benaroyo L, Borasio GD, Büla C, Jox RJ. Geriatric palliative care: a view of its concept, challenges and strategies. BMC Geriatr. 2018;18(1):220.
Zimbroff RM, Ritchie CS, Leff B, Sheehan OC. Home-based primary and palliative care in the Medicaid program: systematic review of the literature. J Am Geriatr Soc. 2021;69(1):245–254.
Chapter 71
Effective Communication Strategies For Patients with Serious Illness
Brook Calton, Matthew L. Russell
INTRODUCTION
Older adults are living longer, and with more chronic diseases, than ever before. The number of Americans aged 65 and older is projected to nearly double from 52 million in 2018 to 95 million by 2060. The average US life expectancy increased from 68 years in 1950 to 79 years in 2017.
Approximately 80% of older adults have at least one chronic medical condition, and 77% have at least two. In fact, the majority of hospice diagnoses are now noncancer related, such as congestive heart failure, chronic obstructive pulmonary disease, or dementia. Taken together, these data suggest effective communication around advanced illness is of ever-increasing importance for older adults facing longer life trajectories, complex chronic illness, and debility and decline. Having achieved longevity, many older adults living with advanced illness prioritize goals such as function, comfort, and family support. Skillful communication is needed to ensure the treatments we offer align with these goals.
Effective communication with patients and families can have a number of benefits. It has been shown to improve diagnostic accuracy, health outcomes, treatment adherence, and patient satisfaction. In addition, effective end-of-life communication has been associated with decreased intensity of care, increased quality of life, and improved quality of dying for patients.
Studies consistently confirm the majority of patients facing advanced illness desire to have honest conversations about their goals, values, and end-of-life care with their providers. A recent systematic review of advance
care planning in older adults found between 61% and 91% of older individuals wanted to discuss their end-of-life care with their providers. Benefits cited by patients included assurance their wishes would be respected, an opportunity to address important medical care and treatment issues before cognitive impairment or being physically unwell occurred, and to assist loved ones with decision making. They felt this responsibility to begin the discussion lies with physicians, they wanted to have the conversation in an open and honest manner, and preferred these conversations began early in their illness course.
However, despite the fact that communication around advanced illness is not only beneficial but also desired by most patients, this communication often remains inadequate. In studies, only an estimated 2% to 29% of frail older adults have discussed some form of end-of-life plans with a health care professional. In general, these discussions are often dominated by the clinician: in one study of 60 recorded patient-internist advance directive discussions, physicians spent 70% of the time talking and initiated more detailed discussions about patients' values and goals less than 33% of the time. There are a number of challenges to effective communication that may contribute to the gaps described above. Potential barriers and suggested solutions are presented in Table 71-1.
TABLE 71-1 ■ PROVIDER BARRIERS AND SUGGESTED SOLUTIONS TO ENHANCE COMMUNICATION AROUND ADVANCED ILLNESS
Learning Objectives
Describe a suggested six-step process for addressing goals of care and treatment preferences with patients who are seriously ill.
Identify unique challenges to communicating with seriously ill older adults and strategies to address them.
Summarize key communication techniques including ASK-TELL-ASK and SPIKES.
In this chapter, we will provide an approach as well as key skills to prepare providers for having important conversations about goals of care and advance care planning with patients and families. We will also discuss unique challenges and situations specific to communication with older adults.
SUGGESTED APPROACH TO COMMUNICATING WITH PATIENTS WITH ADVANCED ILLNESS
Communication around advanced illness can take many forms. Potential conversations may include, but are not limited to, providing prognostic information, discussing treatment preferences, engaging in advance care planning, and transitioning to comfort care and/or hospice. It is important to remember, particularly in the outpatient setting, that any of these topics should be thought of as a process that occurs over time using open and honest communication that accounts for the patient’s personal desire for information.
It is likely that patients’ goals and preferences will shift over time as their illness progresses and/or what is important to them changes. Below we provide a suggested six-step approach that can be adapted to any of the conversations described above.
Step 1: Prepare
Key Clinical Points
Data suggest patients and families generally desire to have open conversations with their medical providers around their goals of care, treatment preferences, and end-of-life wishes.
Conversations around goals, values, and treatment preferences should be held over time, as a patient’s health status and/or perspectives evolve.
Clarifying patients' and families' understanding, as well as effectively addressing emotions around serious illness, can help make these conversations easier.
In the outpatient setting, if possible, it is optimal to ensure an adequate amount of protected clinic time to have the conversation. Encourage the patient to bring any family members or caregivers they would like to hear and/or participate in the conversation. Clarify medical treatment options and prognostic information with other providers engaged in the patient's care prior to the visit. In the inpatient setting, it is important to arrange for a private place to have the conversation and, again, invite key family members, caregivers, and health care providers (eg, consulting MDs, bedside nurses, chaplains, etc) based on the patient's preferences. Prior to the conversation, meet briefly with the participating health care providers (premeeting) to agree upon an agenda, prognosis, suggested treatment course, etc.
Step 2: Clarify Patient’s Understanding of Illness and Preferences for Information/Decision Making
Any conversation around advanced illness requires that the provider first assess the patient’s understanding of their illness. If the patient is unable to participate in the conversation, the same process should be undertaken with the patient’s family and/or designated medical decision maker, depending on the circumstances. This step, using open-ended questions, facilitates the provider’s ability to successfully provide education around the patient’s illness, discuss treatment options based on a shared understanding of the patient’s condition, and avoid potential misunderstandings (Table 71-2).
TABLE 71-2 ■ SUGGESTED LANGUAGE FOR ELICITING A PATIENT’S UNDERSTANDING OF THEIR ILLNESS
Next, the patient’s preferences for information and medical decision making should be clarified. This is critical because while most patients desire information about their illness and want to be involved in decision
making, a significant minority do not. Asking these questions (and then proceeding accordingly) will build the patient’s trust and empower them in these conversations (Table 71-3).
TABLE 71-3 ■ SUGGESTED LANGUAGE FOR ELICITING A PATIENT’S PREFERENCES FOR INFORMATION AND MEDICAL DECISION MAKING
Step 3: Education and Exploration
Step 3 involves patient education and exploration of patients’ hopes and values to facilitate medical decision making.
Once you have a sense of the amount of information the patient desires, the foundational communication skill ASK-TELL-ASK can be useful for providing patient education. You’ve already done the first “ASK” in step 2, by eliciting the patient’s understanding of their illness. The provider should next “TELL” the patient in straightforward language the information they are hoping to convey (eg, help them better understand their illness, treatment preferences, difficult news, etc) in short,
digestible chunks. The second "ASK" is an additional check for understanding or perspective. An easy way to ask is, "To make sure I was explaining things clearly, can you summarize what I just said?" This ASK-TELL-ASK cycle is often repeated several times during the same conversation, to ensure information is provided in a sensitive manner and in digestible pieces.
Understanding a patient's hopes and values can help the provider offer sound medical advice in line with what is most important to the patient.
A list of suggested questions is given in Table 71-4.
TABLE 71-4 ■ SUGGESTED LANGUAGE FOR EXPLORING HOPES AND VALUES
Step 4: Respond to Emotions
Providers often focus on the details of medical care and miss emotional cues. Responding to emotional cues is a key aspect of successful communication, building trust, and supporting patients. Clinicians can express empathy nonverbally and verbally. Eye contact, open body position, leaning in toward the patient, and when appropriate, light touch
can be useful tools for showing nonverbal empathy. An invaluable tool for expressing verbal empathy employs the mnemonic “NURSE.” Details of the mnemonic and sample language are provided in Table 71-5.
TABLE 71-5 ■ NURSE STATEMENTS AND SUGGESTED LANGUAGE
Step 5: Guide Decision Making
Every conversation is different, and in many cases, exploring hopes, values, and emotions can be extremely beneficial even without a specific medical decision to make. If there are decisions to make, the next step involves presenting options to the patient that are consistent with what has been learned about what is important to them. Providers do not need to identify every possible option, particularly those that are inconsistent with the patient's expressed goals. Information should be given in small chunks, and jargon should be avoided. Again, using ASK-TELL-ASK allows for frequent stops to check for comprehension and to provide clarifications or corrections if necessary. In many situations, based on the decision-making preferences elicited in step 2, providers can ask for permission to offer a recommendation based on the patient's values and what is known about the medical circumstances.
Many patients and families may need time and multiple discussions before they can make major decisions or changes in care, such as enrolling in hospice or deciding whether to move into a nursing home. In practice, this often means not expecting a decision to be made at the end of one discussion. Suggested language is in Table 71-6.
TABLE 71-6 ■ PROVIDING A RECOMMENDATION, SUGGESTED LANGUAGE
Step 6: Summarize and Plan
To close the conversation, the provider should summarize any decisions that were made during the discussion, offer to answer questions, and arrange a plan for follow-up. This is also an opportune time to reaffirm nonabandonment, regardless of the outcome of the conversation and the goals of care. In both the outpatient and inpatient settings, thank the patient for having this difficult conversation and remind them that you hope this process of exploration and decision making will be an ongoing one as time goes on and their health evolves.
STRATEGIES TO ADDRESS COMMUNICATION CHALLENGES WITH OLDER ADULTS
Communicating with older adults can come with a unique set of challenges. Specific age-related issues (eg, hearing and/or vision loss, decline in memory, slower processing of information) alongside psychosocial factors related to aging (eg, loss of identity, lessening of power and influence over one's life, and separation from family and friends) can impact communication.
Studies have found that, in general, providers spend less time with older patients and take a more paternalistic approach. Older patients may withhold information about symptoms or conditions they consider "normal for their age," such as pain, potentially hindering successful management of these important issues.
Hearing
Hearing loss is the third most common chronic condition reported by older adults.
Although older adults can compensate to a certain extent by devoting more cognitive effort to comprehension, this practice may limit the patient's ability to encode information into long-term memory, draw inferences, and follow complex conversations. Tips for communicating with patients with hearing loss include the following:
Increase volume of speech slightly, speak a bit slower, and present information as clearly as possible.
Avoid shouting; speaking loudly raises the pitch of the voice, making it more difficult for patients with hearing impairment to understand.
Keep good eye contact and sit directly facing the patient so the patient can supplement what they hear with lip reading. Keep lips at the patient’s face level.
Minimize use of the computer if possible.
Make sure the meeting space is as quiet as possible. Close the door to noisy hallways. Turn off background music and TVs.
Make use of pocket talkers and encourage the patient to use their hearing aids, when available.
Memory Loss and Dementia
Working memory declines with age. Working memory is important for processing complex sentences (specifically, sentences with multiple embedded clauses), which are common when discussing serious news, goals of care, and advance care planning with patients with advanced illness. To enhance comprehension, providers should attempt to break up individual pieces of information into separate sentences that could stand on their own. ASK-TELL-ASK (described earlier in this chapter) is a useful tool for confirming comprehension.
In cases where a patient suffers from dementia advanced enough to impair their ability to make their own medical decisions (or the patient lacks decision-making capacity for other reasons, such as delirium), effective communication with surrogate decision makers is critical. Important medical decisions by surrogates for patients with advanced dementia are common. In a study of 323 health care proxies for patients with advanced dementia, 40% recalled making at least one medical decision—the most common being around feeding (27%) and infections (21%). In a study of nursing home residents living with advanced dementia and in their last 3 months of life, surrogate decision makers who had an understanding of the patient's poor prognosis and the expected clinical complications in advanced dementia were much less likely to elect burdensome interventions than surrogates who lacked this understanding. This finding demonstrates the need for providers to, over time, educate the family about what to expect in the advanced stages of dementia and to foreshadow what decisions may arise in the future.
In situations where a medical decision does need to be made by a surrogate decision maker for a patient with advanced dementia (or a patient who otherwise lacks capacity), the following communication approach is recommended. First, background information should be obtained, including reviewing the patient’s advance directive with the surrogate decision maker, if available, and assessing what conversations the surrogate decision maker has had with the patient previously regarding their values and treatment preferences. This information may be sufficient to make some medical decisions, and in most situations, preferences on an advance directive should be honored.
When further discussion is needed, the next decision-making step relies on substituted judgment—asking what the patient would have wanted if he or she could tell us. Time spent with the surrogate decision maker reflecting on the patient's values (eg, what brought the patient pleasure in life, what did
quality of life mean to them) can be invaluable in aiding with complex medical decisions. These include deceptively simple questions like, “Tell us about your mother.” We recommend, for this step, bringing the patient’s “voice” into the decision-making process; for example, “What would your mother choose if she could tell us?” or “Knowing your mother, what do you think would be most important to her right now?” An appeal to
substituted judgment may remove some of the burden, by framing the decision as the patient’s own choice rather than the surrogate’s.
When there is no reasonable basis on which to interpret how a patient would have made a medical decision, decisions should be made that are in the best interest of the patient. That is, the best path to promote the well-being of the patient as a unique person, in the context of their relationships, values, religion and spiritual beliefs, and culture. Providers should encourage surrogates to weigh the harms and benefits of various treatment options, including pain and suffering, the degree and likelihood of benefit, and any impairments that will result from the treatment in question. Provider treatment recommendations, based on your understanding of the patient's values, can be very helpful here—and are encouraged. When best interest standards apply, the question becomes, "What do you think is best for your mother?" Occasionally, conflicts arise between what providers and surrogates feel is best, including questions about whether the surrogate is truly making decisions in the patient's best interest. Providers are encouraged to consult with their medical center's administration regarding next steps in these ethically difficult situations.
Engaging the Supportive Network
Older patients may have formal or informal caregivers as well as family and friends who form a part of their support network. Engaging patient-identified supports is key to promoting fruitful discussions. Family members and caregivers often accompany older adults to medical appointments. It is particularly important that caregivers (specifically, surrogate decision makers) be present for key conversations regarding an older adult’s goals of care so they too can understand the patient’s preferences. However, a common occurrence is that caregivers of older adults may unintentionally (or intentionally) fall into a pattern of “speaking for” the older patient.
Sometimes, but not always, this is due to difficulties in communication including hearing or vision problems or memory impairment.
Although it is sometimes necessary for a caregiver to provide supplemental information, it should not serve as a substitute for direct communication between the older adult patient and provider. Maintain direct eye contact with the patient, allow extra time for the patient to answer questions, and avoid speaking in the third person about the patient. To help establish the patient’s sense of autonomy and participation in their health care, continually attempt to redirect the conversation back to the patient if you find the caregiver is tending to talk over the patient.
Prognostic Uncertainty
As mentioned in the introduction, older adults are living longer, with more chronic conditions that affect their quality of life and prognosis. When it comes to prognosticating, this population can be more challenging to the provider than, for instance, patients with a more predictable, terminal illness such as metastatic cancer. Prognostic uncertainty may heighten providers' hesitancy to discuss prognosis with these complex patients. However, we know these conversations are important and desired by patients. In one study of frail older patients with a mean age of 73, a life-limiting illness, and a need for assistance in at least one instrumental activity of daily living, over half of the patients whose physician had never discussed prognosis reported wanting to discuss it. While providers often consider prognosis conversations as important to medical decision making (including, but not limited to, chronic disease management, cancer screening, initiation of dialysis, and advance care planning), many older adults find this information helpful in determining life choices (eg, financial decisions, long-term care and housing, and quality-of-life considerations like spending time with family). Using prognostic indexes, many of which can be found on the user-friendly E-Prognosis website (www.eprognosis.com), can help inform your estimates alongside the clinical picture for your individual patient.
Additional techniques for approaching conversations where prognostic uncertainty is present are summarized in Table 71-7.
TABLE 71-7 ■ ADDRESSING PROGNOSTIC UNCERTAINTY
SPECIAL SITUATIONS
Discussing Difficult News
Difficult news is defined as any news that drastically and negatively alters the patient's view of his/her future. Although we often associate discussing difficult news with the practice of oncology, all providers, regardless of specialty, are called upon to discuss difficult news with patients. Difficult news may relate to a diagnosis (eg, terminal, chronic, cancer, etc), prognosis, test results significant for disease progression, a bad outcome, or even medical mistakes. For older adults, difficult news may come in the form of a loss of independence, such as being deemed unsafe to continue driving a car, or requiring a higher level of care that will force the patient to move from their home of many years. It is important to remember that (1) most patients want to know difficult news (in studies, 75–90%) and (2) how a patient responds to news is influenced by a patient's perspective, life experiences, occupation, personality and coping skills, religion, and social support. It is also important to recognize that a patient's desire for medical information and a family's desire to disclose such information (eg, the difficult news of a new cancer diagnosis) is influenced by a number of factors including, but not limited to, culture, ethnicity, religion, and past experiences with illness.
Asking specifically about your patient’s preferences regarding how much information they want to know about their health, diagnosis, and treatment options and also how they prefer to make medical decisions is key to providing person-centered medical care.
When difficult news is communicated in an effective manner, it can have an important impact on patient satisfaction and decreasing patient anxiety and depression. Discussing difficult news poorly can not only negatively impact patient outcomes, but can also have short-term (eg, anxiety) and long-term (eg, feelings of failure, regret, identification, higher rate of “burn-out”) effects on the provider. Challenges to discussing difficult news include providing information consistent with the patient’s understanding of their disease, concerns about taking away patient hope, and addressing emotion. A useful roadmap for discussing difficult news uses the acronym SPIKES (Table 71-8). The GUIDE framework from VitalTalk (www.vitaltalk.org) can also be useful.
TABLE 71-8 ■ SPIKES FOR DISCUSSING DIFFICULT NEWS
Tube Feeding in Advanced Dementia
Outcomes including death, aspiration pneumonia, functional status, and patient comfort are similar, if not better in some cases, for careful hand feeding compared to tube feeding in patients with advanced dementia (eg, bed-bound, unable to ambulate, and typically limited, if any, ability to communicate verbally alongside feeding difficulties, such as refusal of food, dysphagia, and/or recurrent aspiration). Additionally, the risk of hospitalization, ER visits, agitation, use of physical and chemical restraints, and incidence of pressure ulcers all appear to be higher with tube feeding as compared to careful hand feeding. Given these data, the American Geriatrics Society, in their "Choosing Wisely Campaign," does not recommend percutaneous feeding tubes in patients with advanced dementia and recommends oral assisted feeding instead.
Despite data that feeding tube placement in advanced dementia may not provide benefit, and may even cause harm, one of the most common communication challenges providers caring for older adults face revolves around these decisions. Insufficient education of patients and surrogates on the potential risks of tube feeding likely contributes to this challenge as does the important social and symbolic role food plays in many cultures and religions.
One of the best strategies for effectively addressing this issue is to start the conversation early. In most cases, outpatient providers will have an
opportunity over months to years to observe the swallowing ability and progressively decreasing oral intake of their patient with advanced dementia. After ruling out reversible causes, providers should continually educate patients (if able to engage), family members, caregivers, and surrogates that feeding difficulties are a sign of progressive, advanced dementia and can be anticipated to worsen over time. ASK-TELL-ASK, summarized earlier in Step 3: Education and Exploration, can be a particularly useful skill here to assess what a patient or family understands and to provide targeted education around the natural history of dementia and feeding difficulties.
Discussions regarding preferences for feeding support should begin early in the course of dementia, ideally when the patient’s cognition is intact enough to express their own preferences. These discussions should not be delayed until a crisis develops. Any decisions made with the patient should be documented in the medical record and an advance directive. This decision-making process includes aligning what patients and families are hoping for, with the realistic expectations and potential outcomes of tube feeding. Questions that can be particularly helpful, both early in the patient’s illness and in crisis situations (eg, recurrent hospitalizations for aspiration pneumonia in a patient with advanced dementia) include, “What do you hope a feeding tube would do for you (or your loved one)?,” “What worries do you have about feeding tubes?,” and “Have you seen any other patients with severe dementia with a feeding tube? What do you think that
experience was like for them?”
End-of-Life Preferences
As older adults with advanced illness reach the end of their lives, it is important to ensure that their wishes regarding resuscitation and ICU care are confirmed and documented. A recent study from Teno et al. found an evolving trend toward less in-hospital care, more deaths at home and in assisted living facilities, and fewer transitions of care among Medicare beneficiaries near the end of life. Equally if not more important is ensuring patients' wishes, including desire for hospitalization, hopes for what the end of their life would look like, and preferred place of death are discussed.
Patient preferences should be revisited as their health status evolves as patient preferences for medical care change over time. When possible, it is critically important to ensure surrogate decision makers are present for these important conversations so they are aware of the patient’s wishes and
appreciate nuances of the patient's preferences that may not be captured on advance care planning documents. Suggested language for exploring a patient's end-of-life preferences, beyond resuscitation, is as follows:
“If you were close to the end of your life, what would be most important to you?”
“Do you have a sense of where you would prefer to be at the end of your life?”
“Some patients feel strongly they’d like to die at home, while others don’t have a preference. Do you?”
CONCLUSION
Effective communication with older adults with advanced illness requires a diverse and flexible set of skills. Techniques including ASK-TELL-ASK, use of NURSE statements to address emotion, and the SPIKES model for delivering difficult news can be invaluable in overcoming some of the challenges to caring for older adults, including sensory deficits, memory problems, and prognostic uncertainty for patients facing multimorbidity and/or frailty. Despite the challenges, having these important conversations can be extremely rewarding. Most patients want to discuss these important issues with their providers. Furthermore, conversations around a patient's quality of life, goals of care, and advance care planning allow clinicians to match their treatment recommendations to patients' ultimate goals and values and provide them with the best care possible.
FURTHER READING
AGS Choosing Wisely Workgroup. American Geriatrics Society identifies five things that healthcare providers and patients should question. J Am Geriatr Soc. 2013; 61(4):622–631.
Back AL, Arnold RM, Baile WF, Tulsky JA, Fryer-Edwards K. Approaching difficult communication tasks in oncology. CA Cancer J Clin.
2005;55(3):164–177.
Back AL, Arnold RM, Tulsky JA. Mastering Communication with Seriously Ill Patients: Balancing Honesty with Empathy and Hope. Cambridge, UK: Cambridge University Press; 2009.
Clayton JM, Hancock KM, Butow PN, et al. Clinical practice guidelines for communicating prognosis and end-of-life issues with adults in the advanced stages of a life-limiting illness, and their caregivers. Med J Aust 2007;186:S77, S79, S83–108.
Fisher R, Ury W, Patton B. Getting to Yes: Negotiating Agreement Without Giving In. New York, NY: Penguin Books; 1981.
Mitchell SL, Teno JM, Kiely DK, et al. The clinical course of advanced dementia. N Engl J Med. 2009;361(16): 1529–1538.
Ritchie CS, Roth DL, Allman RM. Living with an aging parent: “It was a beautiful invitation.” JAMA. 2011;306(7):746–753.
Smith AK, Williams BA, Lo B. Discussing overall prognosis with the very elderly. N Engl J Med. 2011;365(23): 2149–2151.
Sulmasy DP, Snyder L. Substituted interests and best judgments: an integrated model of surrogate decision making. JAMA. 2010;304(17):1946–1947.
Teno JM, Gozalo P, Trivedi AN, et al. Site of death, place of care, and health care transitions among US Medicare beneficiaries, 2000-2015. JAMA.
2018;320(3):264–271.
The Gerontological Society of America. Communicating with Older Adults, An Evidence-Based Review of What Really Works. Washington, DC; 2012.
van Vliet LM, Lindenberger E, van Weert JC. Communication with older, seriously ill patients. Clin Geriatr Med. 2015;31(2):219–230.
von Gunten CF, Ferris FD, Emanuel LL. The patient-physician relationship. Ensuring competency in end-of-life care: communication and relational skills. JAMA. 2000;284(23):3051–3057.
Widera EW, Rosenfeld KE, Fromme EK, Sulmasy DP, Arnold RM. Approaching patients and family members who hope for a miracle. J Pain Symptom Manage. 2011;42(1):119–125.
Chapter
72
Ethical Issues
Timothy W. Farrell, Caroline A. Vitale, Christina L. Bell, Elizabeth K. Vig
INTRODUCTION
Health care professionals who work with older individuals in diverse settings such as their homes, assisted living facilities, skilled nursing facilities, outpatient clinics, and hospitals may encounter ethical dilemmas in their work. Ethical dilemmas arise when there is uncertainty or disagreement between stakeholders about the right thing to do in a situation. Ethical dilemmas pertinent to older patients may include questions about the balance of patient privacy versus patient safety, patients’ abilities to make their own medical decisions, decisions about preferred treatments at the end of life, and decisions made by surrogate decision-makers.
In this chapter, we will discuss a framework for approaching ethical dilemmas, and provide an overview of the ethics topics especially relevant to the care of older adults. Sections include: an approach to thinking about ethical dilemmas, moral distress in professional caregivers, breaking bad news, surrogate decision-making, ethical decisions at the end of life, dilemmas between honoring self-determination and individual/societal safety, ethical challenges in the nursing home setting, and resource allocation and ageism.
AN APPROACH TO THINKING ABOUT ETHICAL DILEMMAS
Case
Mrs. C is an 88-year-old woman with congestive heart failure, hypertension, atrial fibrillation, and chronic kidney disease. She lives alone in her own home. She was hospitalized last month for delirium secondary to a urinary tract infection. She is admitted again this month after a fall. While in the hospital, she is evaluated by physical therapy and occupational therapy who both recommend a short stay in a nursing home for rehabilitation. Mrs. C refuses to go to a nursing home and demands to return to her home. Her medical team insists that going directly home is not a safe discharge plan, and wants to require her to go to rehab.
Question: What is an ethically appropriate discharge plan for Mrs. C?
Health care professionals working with older patients may encounter ethical dilemmas, regardless of their discipline or work setting. Ethical dilemmas usually stem from a conflict in values between stakeholders. For example, an older patient with multiple comorbidities, who values being alive regardless of his condition, may disagree with his physician about the best way to manage his advanced kidney disease. To add complexity, older adults are a heterogeneous group with diverse values, preferences, and goals that may or may not be influenced by their race, ethnicity, or religious backgrounds. In terms of their beliefs about end-of-life care, for example, older patients may have more heterogeneous views than their clinicians. Because of this heterogeneity, clinicians should not assume that they know a patient’s values and goals without engaging in a conversation with the patient.
Learning Objectives
Discuss the steps in the CASES approach and how it is used to address ethical dilemmas.
Define moral distress and identify risk factors for developing it.
Explain the steps in the SPIKES approach to breaking bad news.
Understand the need for surrogate decision-making to ensure appropriate care of many older adults.
Explore the concepts of substituted judgment versus best interest.
Gain insight into the experience of surrogate decision-makers and potential for surrogate distress after making difficult medical decisions.
Describe the indications for palliative sedation.
Recognize the barriers to cessation of driving.
Discuss examples of ethical dilemmas that arise in nursing homes when the desire to maintain quality of care indicators is in conflict with promoting patient preferences and quality end of life care.
Recognize ethical principles underlying health care resource allocation under conditions of resource scarcity, including arguments against excluding patients from healthcare resources based on age.
Key Clinical Points
Health care professionals who work with older patients may encounter ethical dilemmas which arise out of conflicts in values about the right thing to do.
Patients with decisional capacity have the right to make decisions that their medical teams and families don’t agree with.
Moral distress is a condition in which clinicians feel forced to provide care they believe is wrong, which can lead to burnout and to them leaving their jobs and professions.
The majority of people want to be told if they have dementia.
Making medical decisions for loved ones can be very stressful for surrogate decision-makers.
Through a process of shared decision-making, patients or their surrogates share information about their goals and care preferences with clinicians, and their clinicians share pertinent medical information and make care recommendations based on the patient’s values and goals.
Palliative sedation is a legal intervention of last resort to promote relief of intractable suffering for terminally ill patients who have comfort as their primary goal of care.
Physician aid in dying (PAD), also called physician-assisted suicide, is legal in some US states and countries. Euthanasia is not legal in the United States, but is legal in Belgium and the Netherlands.
Patients may raise the topic of PAD as a way to begin a conversation about the end of life. Clinicians should view this as an opportunity to talk about sources of intractable suffering and fears about dying regardless of whether or not they live in a state where the practice is legal.
Questions about whether or not to report concerns about an older individual’s driving are answered by weighing the clinician’s duty to protect the patient’s safety, with the duty to maintain confidentiality, and the duty to protect the public. Some US
states have mandatory reporting requirements.
Ethical issues prevalent in nursing homes include not only those in which the patient and family’s care goals are in conflict, but also those in which regulations are in conflict with quality end of life care.
Triage committees, and not front-line health care providers, should make decisions about allocation of limited health care resources under conditions of resource scarcity, and patient age should not factor into these decisions.
Ethical dilemmas may be distressing to health care professionals.
Although health care professionals learned about ethics in their training, many are not familiar with a standard approach to thinking about and responding to ethical dilemmas. Without knowledge of such an approach, ethical dilemmas encountered may lead to moral distress and burnout.
Familiarity with a framework and using a standard approach to thinking about ethical dilemmas may help health care professionals to find ways to resolve them, while honoring patient preferences, and reducing their own stress.
Different frameworks for approaching ethical dilemmas exist. One approach is the four-box method from Jonsen et al., in which important information about the patient’s medical condition, preferences, quality of life, and other contextual features of the case is collected. Another is the CASES approach to ethical dilemmas which has been developed and is used by
ethics consultants throughout the Department of Veterans Affairs (VA) health care system (see Table 72-1 and Further Reading). Although this approach was developed for use by ethics consultants, it also can be applied by others who are trying to better understand ethical dilemmas. Since the second step of the CASES approach is similar to the four-box method, let us apply the CASES approach to the case of Mrs. C above.
TABLE 72-1 ■ THE CASES APPROACH
C-Clarify the ethics concern
Does this case involve an ethics concern about the right thing to do?
What is the conflict in values that is leading to the concern? Values are defined as strongly held beliefs, ideals, principles, or standards that inform ethical decisions or actions.
A-Assemble the relevant information
What types of information are needed?
i. Medical facts
ii. Patient's preferences and interests
iii. Other parties' preferences and interests
iv. Ethics knowledge: codes of ethics, ethics guidelines, and consensus statements; published literature; precedent cases; institutional policy, documents, and law; outside ethics experts
Which stakeholders need to be interviewed?
S-Synthesize the information
Analyze the information:
i. Review the range of ethically justifiable options
ii. Weigh the different potential outcomes and the impact of these outcomes on the stakeholders
E-Explain the synthesis
Communicate the ethically justifiable options to the key stakeholders
S-Support the consultation process
Follow up with stakeholders
Data from Berkowitz KA, Chanko BL, Foglia MB, et al. Ethics Consultation: Responding to Ethics Questions in Health Care, 2nd ed. Washington, DC: US Department of Veterans Affairs; 2015.
The first step of the CASES approach is to Clarify whether the dilemma is really about ethics or something else, such as a legal issue. This is done by identifying whether there is a conflict in values between two of the involved stakeholders about the right thing to do. To do this, we try to understand the perspectives and values of the involved stakeholders, and identify which values are in conflict. In the case of Mrs. C, the conflict in values is between Mrs. C, who doesn’t want to go to a nursing home for rehab, but wants to go home, and her medical team that believes sending her home would be harmful to her. This dilemma also can be viewed as a conflict between ethical principles. In the case of Mrs. C, the conflict in principles is between honoring Mrs. C’s autonomy/self-determination and promoting both beneficence and nonmaleficence.
In the second step of the CASES approach, information relevant to the case is Assembled. This information includes medical information, patient preferences, other parties’ preferences, and ethics knowledge. In the case of Mrs. C, we already know that she has multiple medical problems, lives alone, is in her second hospitalization, and is staunch in her refusal to go to a nursing home for rehab. Knowing her reason for refusing this would be helpful. For example, she might tell us that her husband died in a nursing home after a prolonged decline from dementia, and that she gets flashbacks and panic attacks whenever she enters a nursing home. Or, she might tell us that she needs to get home to care for her frail neighbor. We would want to hear more from Mrs. C’s medical team about their concerns, and what they believe they can and cannot allow their patients to do. In thinking about this case, it also might be helpful to look at the hospital’s policies around informed consent and refusal of care, and to search for pertinent literature about this issue (see Further Reading).
Once we have gathered the relevant information about Mrs. C’s case, including the stakeholders’ perspectives, we would need to begin to consider the different possible options and the impacts of each option on the stakeholders. In the Synthesize step, we would weigh the potential risks and benefits to Mrs. C of honoring her preference and sending her home versus insisting that she be discharged to sub-acute rehab. An important aspect of the case is whether Mrs. C. has decisional capacity to make the “poor” decision to return to her home or not. (Decisional capacity is discussed in Chapter 10.) In assessing her capacity to make this specific decision, it
would be important to find out if she understands the risks and benefits of going to a nursing home for rehab versus going directly home.
If Mrs. C has decisional capacity to make the decision to go home, her medical team cannot force her to go to a nursing home against her will. In light of this, her team should consider ways to promote her continued recovery and safety at home. This could include arranging for more resources in her home, such as a visiting nurse, physical therapist, occupational therapist, Meals on Wheels, and a product she could use to quickly call for help in the future. Or it might involve brainstorming about how to get help for her frail neighbor, so she would feel more comfortable going to the nursing home for rehab.
If Mrs. C lacks decisional capacity to make the decision to go home, then the medical team will need to involve her legal surrogate decision-maker to help make this decision. Since some states have laws forbidding placement of individuals in nursing homes against their will, finding a solution could prove to be difficult, and might involve substantial negotiating and developing creative solutions.
When the CASES approach is used by health care professionals to understand the nuances of ethical dilemmas in their work, the last two steps of the approach are less critical than they are for ethics consultants. These last two steps are included for ethics consultants to help ensure that their recommendations are shared with those involved with the case and that appropriate follow-up occurs. However, as a health care professional uses this approach and gains a deeper understanding of a case, he/she may want to share that information with others who are involved. It also may be helpful to identify recurrent ethical dilemmas within an organization, and determine if systems level interventions are needed.
ETHICAL DILEMMAS CAN LEAD TO MORAL DISTRESS IN PROFESSIONAL CAREGIVERS
Case
Mr. G is a 78-year-old skilled nursing facility resident with multiple medical problems, including mild dementia, who is admitted to the intensive care unit with pneumonia and possibly sepsis. He is intubated. Over the next 2 weeks, he develops acute respiratory distress syndrome (ARDS), Clostridium difficile colitis, and a pressure ulcer. His renal function is deteriorating. He
remains delirious, and is therefore unable to participate in medical decision-making.
Mr. G’s advance directive states that he wouldn’t want life-prolonging measures for a terminal condition, and would want care focused on comfort. His wife, whom he designated as his health care agent through a Durable Power of Attorney for Health Care, insists that treatments with the goal of life prolongation be continued. His nurses are concerned that his prognosis is poor and his preferences aren’t being honored. His physicians say he isn’t “terminal” yet.
His nurses are concerned that he has pain, noting that he grimaces when they care for him. His wife refuses to allow pain medications for fear that they will prevent him from waking up.
When the doctors, nurses, social workers, and the chaplain have tried to talk to Mrs. G about her husband’s goals of care or his pain management, she becomes agitated and threatens lawsuits. Team members are frustrated.
Rumors begin to circulate that Mrs. G is demented, in denial, vengeful, and wanting to keep him alive for his pension. Some have commented that scarce resources are being wasted in keeping him alive.
Question: In reading this case, can you sense the distress experienced by those caring for Mr. G?
Moral distress is a term that was coined by Jameton in 1984. Moral distress occurs when someone or something prevents you from doing what you believe is right, forcing you to act contrary to your core values. In other words, you believe you are providing care that is morally wrong. Although moral distress has been studied most in nursing, it has been documented in numerous health care professions, and even in health care leaders. Health care professionals who interact with older patients may personally experience moral distress, and also may interact with colleagues who are experiencing it.
There are numerous root causes of moral distress (see Table 72-2).
These causes may stem from clinical factors of a case, from individual characteristics of members of the care team, or from institutional factors. Multiple members of the care team may experience moral distress about a given case, but in response to different aspects of the case.
TABLE 72-2 ■ SELECTED ROOT CAUSES OF MORAL DISTRESS
One theory of moral distress holds that one’s level of distress does not return to baseline after experiencing an episode of moral distress, but that, with each case, one’s “moral residue” increases to a new baseline. Thus, over time, there is a crescendo in a clinician’s moral residue. This crescendo of moral residue and “untreated” moral distress can lead to burnout, to individuals leaving their positions, and even to individuals leaving their professions. This can be psychologically costly for individuals and financially costly for institutions (Figure 72-1).
FIGURE 72-1. Impacts of moral distress on the patient, clinician, and organization. (Adapted with permission from Corley MC. Nurse moral distress: a proposed theory and research agenda. Nurs Ethics. 2002;9[6]:636–650.)
After recognizing that a case is causing moral distress, what can health care professionals do about it? One helpful intervention is for the team to talk about the case and the aspects of the case that are giving each of them moral distress. Building one’s moral resilience may be helpful (see Further Reading). Finally, if organizational factors are leading to moral distress in staff, these need to be brought to the attention of the organization’s management.
TRUTHFUL DISCLOSURE
Case
Mr. W is an 83-year-old man who is brought to see his primary care provider by his wife. She confronts the physician outside the examination room and expresses concern that her husband has become more forgetful and no longer is able to pay their bills. She remarks, “If he has Alzheimer’s disease, please don’t tell him.”
Question: What is the ethically justifiable option for the clinician after he examines Mr. W?
In the past, when medicine was more paternalistic, physicians did not routinely inform patients of life-threatening diagnoses. The rationale for not sharing diagnoses, such as cancer, with patients was that this information would cause harm. However, most people want to know their diagnoses. A systematic review of 23 articles with over 9000 participants found that 91% of cognitively intact people and 85% of cognitively impaired people wanted to know if they had dementia. A second study found that there were no increases in rates of depression or anxiety after people learned of a dementia diagnosis.
As medicine has moved from being paternalistic to being patient-centered, the practice of withholding information from patients has decreased. In this era of patient-centered care, patients should be offered the opportunity to learn about their medical conditions. Patients also have the right to decline to hear about their conditions. In certain cultures, it is still standard to share information about serious illness not with patients, but with their family members (ie, familismo). In some cultures, there is a belief that talking about bad things will make them happen. Thus, if a patient doesn’t want information, for any reason, giving this information without permission is not ethically justifiable and could cause harm.
One method advocated for use when breaking bad news is the six-step SPIKES approach (see Further Reading). The steps of this approach are explained in Table 72-3.
TABLE 72-3 ■ THE SPIKES APPROACH TO BREAKING BAD NEWS
In response to the case of Mr. W, it would be appropriate to understand Mrs. W’s reasons for not wanting her husband to know his diagnosis.
Perhaps he has asked her on multiple occasions not to ever tell him if he develops dementia, because of a previous experience with a family member
who suffered from dementia. In this case, it might be appropriate not to share this information with Mr. W. Ultimately, however, he has a right to this information, if he wants it. After hearing her reasons, we could use the SPIKES approach to talk to Mr. W. We would ask about his understanding of his current situation and whether he would want information or not. In the case of dementia, it is appropriate to share information about the diagnosis with a patient with mild dementia who has decisional capacity to make the decision about whether or not to hear the cause of his memory loss. It might not be appropriate to repeat this information over and over to a patient with more advanced dementia, who might experience the emotional distress of receiving this diagnosis multiple times or might not be able to fully understand what this means.
DECISION-MAKING IN CHRONIC AND ADVANCED ILLNESS
Case
Mrs. S is an 82-year-old woman with mild dementia, coronary artery disease, hypertension, and peripheral vascular disease. She misses a meal in the dining room of her assisted living facility, and is found in her apartment lethargic and confused. She is sent to the hospital, admitted to the ICU, and diagnosed with urosepsis. She has a Physician Orders for Life-Sustaining Treatment (POLST) form, which accompanies her to the hospital, with preferences for Do Not Resuscitate status (DNR), Limited Additional Interventions, antibiotics to prolong life, and a trial of use of a feeding tube.
Despite aggressive care in the ICU, she remains delirious. Over the next 2 weeks in the ICU, she suffers a non-ST-segment elevation MI, develops pneumonia thought to be secondary to aspiration, and her kidney function worsens. Her adult children, who are her legal decision-makers, are given regular updates, but are overwhelmed by the numerous medical decisions they are asked to make, including whether to insert a temporary feeding tube in her nose, followed later by whether to insert a more permanent tube in her stomach, and whether to start dialysis. Although they had initially told the ICU to do “everything” to make their mother better, short of resuscitating or intubating her, they begin to wonder if they are doing the right thing. They wonder how long their mother would want this level of care continued.
During a routine meeting with the family, the ICU team uses the term “futile”
and recommends that they change the goal of care to focus on maximizing their mother’s comfort.
Question: What should the family and ICU team consider in deciding how to proceed?
In older patients with chronic medical conditions, a multitude of medical decisions may arise during acute illness, and in the course of worsening of the chronic conditions. When patients are unable to make their own decisions, surrogates are called on to make these decisions. Decision-making for others can be stressful for surrogates, while knowledge of a loved one’s preferences can reduce the burden of decision-making for surrogates. In the case of Mrs. S, her children have attempted to implement her wishes, but her condition has worsened, and they are no longer sure of the right thing to do.
In what follows, several decisions related to this case will be discussed.
We operate under the notion that the patient possesses a fundamental legal and moral right to have his or her care preferences respected. By understanding any prior conversations or specific directives (such as her POLST form) and knowing her overall values, Mrs. S’s family would ideally be able to speak on her behalf, as though they were relaying her explicit wishes, or “standing in her shoes.” This concept illustrates the ethical principle of substituted judgment. When such patient preferences are known, it is expected that the patient’s surrogate decision-maker would operate under this principle, communicating the patient’s wishes and choosing the option that would best fit her stated care preferences.
Often, the patient’s explicit wishes regarding a specific clinical scenario are not known. It can be difficult to anticipate ahead of time all of the different clinical scenarios and medical decisions that may need to be considered in one’s future. In this type of situation, surrogate decision-makers often need to rely on the principle of best interest. In deciding what satisfies a person’s best interest, the surrogate decision-maker—ideally with guidance from the primary team of health care providers caring for the patient—is able to objectively weigh risks, potential benefits, and treatment burdens to make an individualized decision for the patient. However, when evidence of the patient’s wishes does exist, making decisions by substituted judgment should take precedence so that the patient’s care preferences are honored.
At the beginning of her hospitalization, Mrs. S’s children believed that they were doing what she would want in inserting a temporary nasogastric tube. When she didn’t improve despite aggressive interventions and remained unable to swallow safely, her children were less sure if she would want a more permanent tube inserted. Although the literature, position statements from organizations such as the American Geriatrics Society, and the Choosing Wisely campaign have argued against the insertion of feeding tubes in patients with advanced dementia (which is discussed elsewhere), the appropriate use of feeding tubes in other situations is less clear. In Mrs. S’s situation, the decision of whether or not to place a permanent feeding tube depends on her overall goals of care. It is not clearly right or wrong to place it.
Another decision that families often will make for their loved ones is whether to treat infections with antibiotics. Antibiotic therapy is thought to be beneficial and the usual standard of care in conditions such as pneumonia and urinary tract infections, which often arise among older adults. In some patients with advanced serious illness, however, antimicrobials can carry substantial risks and treatment burdens that need to be carefully weighed, especially among patients receiving end-of-life care. For example, patients receiving antimicrobials are at increased risk of acquiring infections with resistant organisms as well as C. difficile infections. Some of the unintended consequences of antimicrobial therapy include not only burdens to the individual patient but also harms to other patients and to society through the possible selection of antimicrobial resistance. Studies are conflicting as to whether antimicrobial treatment of pneumonia may provide symptomatic benefit among patients with advanced dementia. A recent prospective study of patients with advanced dementia and pneumonia demonstrated an association between antimicrobials and increased discomfort. For patients receiving comfort-focused care near the end of life, treatment of pain, fever, and dyspnea with opioids and antipyretics remains effective in providing symptomatic benefit. Whether a course of antibiotics is warranted will depend on the risks and benefits of the treatment and whether providing that course of treatment is consistent with the patient’s goals of care.
In living with chronic conditions, older patients may have acute or chronic worsening of their kidney function. The number of older individuals who are beginning chronic dialysis is increasing. Many of these individuals start dialysis while in the hospital for an acute illness. Studies have shown
that older people with poor functional status, multiple chronic conditions, and age over 85 when starting dialysis may not necessarily live longer with dialysis. Conservative management programs for older patients with advanced kidney disease who are forgoing dialysis are prevalent in the United Kingdom, but not elsewhere. In deciding about whether or not to begin dialysis, Mrs. S’s children need to be fully informed about the risks and benefits of short- and long-term dialysis. As with other decisions, the decision to start dialysis depends on Mrs. S’s goals of care.
After a trial of aggressive therapies aimed at life prolongation, Mrs. S’s clinicians begin to talk about her situation being “futile.” This term is used by health care professionals, but not by lay people, and therefore should not be used in routine conversations with patients or their loved ones. The term is also commonly used when clinicians don’t believe that continuing on the current trajectory is “worth it.” Different definitions of futility exist, including quantitative futility (very low probability of survival) and qualitative futility (patient’s quality of life is below an acceptable standard). These definitions can be problematic because clinicians cannot accurately predict a patient’s chances of survival and may not be able to separate their own views about acceptable quality of life from the patient’s perceived quality of life. In the case of Mrs. S, decision-making might be easier if both the medical team and her family were clear about her goals of care: what makes life worth living for her (in keeping with the Institute for Healthcare Improvement and John A. Hartford Foundation’s Age-Friendly Health Systems’ 4Ms Framework, in which eliciting what Matters to the patient is emphasized), and whether she would consider life acceptable if she lived in a nursing home, left three times a week for dialysis, and was fed through a tube.
When patients are very sick and care is focused on life prolongation, it can be hard for clinicians and family members to step back and examine the big picture. When the goal is to do “everything,” medical teams may want to better understand what this means to a patient and family. They should not feel compelled to offer every intervention that is technically feasible.
Instead, they should consider whether the intervention is clinically meaningful, and whether it will help to meet the patient’s goals of care.
Mrs. S’s medical team has recommended a change in the focus of her care from life prolongation to comfort care. Although this may be a clinically acceptable option, it is important that care recommendations be based on an
understanding of the patient’s values and preferences. Making recommendations tailored to patient preferences would be consistent with a shared decision-making approach. At the same time, it is important to acknowledge that although withholding and withdrawing care are ethically equivalent, withdrawing life-prolonging measures may be more psychologically difficult for decision-makers.
In deciding how to proceed with Mrs. S’s care, consulting a palliative care team may be helpful. The palliative care team could not only help the medical team develop a richer understanding of Mrs. S’s values, care preferences, and goals of care by engaging in in-depth conversations with her children about her life, but also support her family during a stressful time.
MANAGING REFRACTORY SYMPTOMS CAUSING INTRACTABLE SUFFERING NEAR THE END OF LIFE
Case
Mr. P is a 90-year-old man with widely metastatic prostate cancer receiving hospice services in a nursing home. He continues to have severe bone pain from his metastatic disease despite the use of opioids, steroids, zoledronic acid injections, and a course of palliative radiation. As his opioid doses are increased, he develops myoclonus and confusion. Rotation to two different opioids over the next 2 weeks is minimally effective. His confusion worsens, becoming hyperactive delirium with bouts of yelling and attempting to climb out of bed. Addition of antipsychotic medications is only slightly effective. His family and the nursing staff are distressed about his situation and their inability to ensure his comfort and safety.
Question: What are the ethically and legally appropriate options to reduce Mr. P’s suffering?
In a terminally ill patient such as Mr. P, questions may arise about ways to reduce intractable suffering and promote comfort at the very end of life, to ensure a peaceful death. His family may ask if there is something that could be done to end his suffering, such as euthanasia. Those taking care of him may wonder about the use of palliative sedation or physician aid in dying (PAD; also known as physician-assisted suicide) and whether
alternative options exist. The differences between these entities can be confusing to families and clinicians, and are defined in Table 72-4.
TABLE 72-4 ■ KEY TERMS, DEFINITIONS, AND LEGAL STATUS OF INTERVENTIONS TO MANAGE INTRACTABLE SUFFERING AT THE END OF LIFE
Palliative sedation is legal in the United States. It is an intervention provided to relieve intractable and intolerable symptoms at the end of life, and is accomplished by intentionally lowering the patient’s level of consciousness using sedating medications. Palliative sedation should be used as an intervention of last resort in the setting of a palliative care
interdisciplinary team strategy to achieve symptom control within the spectrum of palliative care interventions. The American Medical Association, the American Academy of Hospice and Palliative Medicine, and guidelines from Europe, Canada, and Japan consider palliative sedation to unconsciousness to be appropriate as a last resort in terminally ill patients with goals for comfort and severe refractory distressing symptoms, including agitated delirium, dyspnea, pain, nausea and vomiting, urinary retention due to clot formation, gastrointestinal pain, uncontrolled bleeding, and myoclonus. Palliative sedation for suffering that results not from physical symptoms, but from existential, psychological, or spiritual distress remains controversial.
The recommendations on palliative sedation from the American Academy of Hospice and Palliative Medicine in 2014 are summarized as follows: (1) it is ethically defensible if used after careful interdisciplinary evaluation and treatment of the patient; (2) it should be used when palliative treatments that are not intended to affect consciousness have failed or, in the judgment of the clinician, are very likely to fail; (3) it should be used where its use is not expected to shorten the patient’s time to death; and (4) it should be used only for the actual or expected duration of symptoms. Ideally, the patient, family, and participating interdisciplinary health care professionals should agree that this intervention is consistent with the patient’s goals and is the right thing to do. Health care professionals who believe that participating in palliative sedation is morally wrong should not be forced to participate in the care of patients undergoing this intervention. Treatment of existential suffering should involve the full interdisciplinary team, including mental health and spiritual care experts. Respite sedation, or sedation for a limited time with subsequent removal of sedation to reassess the patient, may be an alternative approach.
An alternative to palliative sedation is the voluntary cessation of eating and drinking (also called voluntary stopping of eating and drinking, or VSED). This practice is legal when chosen, because of intractable suffering, by a competent person with decisional capacity who is approaching the end of life. Many individuals who are approaching the end of life do not have an appetite, and thus do not experience discomfort from VSED. For others, symptoms resulting from voluntary cessation of eating and drinking may require intervention by a palliative care team. Individuals are encouraged to let their medical teams and families know if they decide to pursue VSED.
However, there are cases in which older individuals who had decided to pursue this were evicted from long-term care facilities due to legal concerns.
VSED is a patient-initiated, voluntary process, which necessitates that the patient remains competent at least at the start of the process. The health care team may need to palliate symptoms of thirst and delirium, and sustain the continued withholding of food and fluids for some patients who pursue VSED. Because progression of dementia inherently brings loss of decisional capacity, VSED in the setting of dementia deserves further thoughtful consideration. Some dementia-specific directives allow a person to delineate the potential withholding or withdrawing of food and fluids once dementia progression reaches a certain point (ie, stopping eating and drinking by advance directive, or SED by AD); under such a directive, a surrogate decision-maker directs the withholding of food and fluids when a certain degree of cognitive and functional impairment is reached. How these directives will be implemented across varying state jurisdictions still needs to be understood and studied further. Advocates have focused on the ethical principle of patient autonomy to direct preferences for SED by AD, as is the case with more general advance directives. The concept has gained traction among some advocates for advance care planning, but these directives have also been criticized for relying on a potentially problematic ethical concept called “precedent autonomy,” in which a fully cognizant and healthy patient directs their care preferences to be carried out after loss of decisional capacity, should their degree of functional incapacity and circumstance meet preset conditions, regardless of the clinical situation and social setting in which the patient might land.
However, others, including a recent position statement put forth by the Ethics Subcommittee of AMDA—The Society for Post-Acute and Long-Term Care Medicine, argue that the principle of justice (the obligation to treat all individuals equally regardless of race, gender, or cognitive or physical ability) should be the overarching principle guiding these decisions, along with the physician’s obligations of beneficence and nonmaleficence, and therefore advocate that comfort feeding be implemented for those with advanced dementia. Some of the more detailed directives that include specific preferences for SED by AD are difficult to implement in care settings such as nursing homes and inpatient hospices.
Specific instructions to discontinue hand feeding in advanced dementia or to instruct surrogates to not allow hand feeding if one loses the ability to feed
oneself are potentially difficult to implement in care settings where the provision of oral food and fluids is considered basic, comfort-oriented care, especially if the patient appears to derive enjoyment or comfort from eating or tasting food. A policy of comfort feeding in advanced dementia, on the other hand, allows continued assisted feeding for those who accept food and fluids as tolerated, until their feeding behaviors indicate refusal or show signs of distress with eating.
PAD, also called physician-assisted suicide or death with dignity, is legal in some US states and the District of Columbia. The patient pursuing PAD intentionally self-administers a lethal dose of medication with the intention of ending their suffering. The Death with Dignity laws in Oregon and Washington and similar laws in other US states stipulate that only terminally ill patients who are decisionally intact and voluntarily making this request are eligible to use the law. Interested individuals are required to make two oral requests and one written request, and must be adult residents of their state. Two physicians must verify that the patient is terminally ill with intact decisional capacity. The majority of people requesting this are already receiving hospice services, and often make the request because of concerns about loss of autonomy and control.
Approximately one-third of individuals who obtain medications do not use them. Finally, patients may raise the subject of PAD with their clinicians as a way to begin a conversation about death and dying. Even if clinicians cannot or will not participate, they should still view this as an opportunity to talk about the patient’s sources of unbearable suffering, and fears about dying.
Euthanasia is a process by which an individual, such as a clinician, intentionally administers a lethal dose of medication to a terminally ill patient, which directly ends their life. In contrast to PAD, euthanasia is not legal in the United States. Internationally, it is legal in Belgium and the Netherlands, which have widespread palliative care policies and availability of palliative care professionals. In these countries, the practice of euthanasia is considered one option to manage intractable suffering at the end of life, and is provided in conjunction with palliative care. Of note, a qualitative study characterizing the Dutch experience with euthanasia or physician-assisted suicide requests among older adults without terminal illness and with multiple geriatric conditions between 2013 and 2019 was recently published. These older adults attributed unbearable suffering to a combination of medical, social, and existential issues that include loss of
mobility, fear of decline, social isolation, and loss of meaning. This study found that patients whose suffering was attributed to multiple geriatric conditions accounted for 4% of all cases of euthanasia or physician-assisted suicide in the Netherlands during this period, calling attention to the potential for a “slippery slope” of allowing the use of euthanasia or physician-assisted suicide for suffering due to nonterminal conditions that could instead be addressed with enhanced geriatric or palliative supportive efforts to enhance quality of life.
The ethical distinction between palliative sedation and euthanasia has been debated, hinging on three areas: intention, monitoring, and methods. Palliative sedation does not have the intention to hasten death; the patient is monitored, and the medication is carefully titrated, to achieve the minimum level of sedation required for symptom relief. The American Medical Association (AMA) asserts that the distinction between palliative sedation and euthanasia may be evident on examination of the medical record: repeated doses and continuous infusions suggest palliative sedation, while one large dose or rapidly accelerating doses out of proportion to the level of immediate patient suffering may reflect an intention to hasten death. Other indications of inappropriate palliative sedation include inadequate communication and support among patients, relatives, and staff; inadequate evaluation of psychological distress and existential suffering; and inadequate monitoring. There are multiple published guidelines on appropriate indications, monitoring, and methods for palliative sedation, and most recommend that cases in which this procedure is being considered involve a physician with expertise in palliative care.
BALANCING AUTONOMY AND SELF-DETERMINATION WITH INDIVIDUAL AND SOCIETAL SAFETY
DRIVING
Case
Mr. J is an 89-year-old man living temporarily in a skilled nursing facility where he is receiving physical and occupational therapy after he fell and sustained a patellar fracture. He is doing well, and the interdisciplinary team informs you that he will be ready to discharge home soon. He lives in a one-story home with his wife. He drives to the neighborhood shopping center to meet his friends for lunch, buy groceries, and do other chores. During your assessment, you note that his memory is mildly impaired, with only one out of three items recalled after 5 minutes, and he has difficulty with attention. His clock-drawing test is markedly abnormal, with impaired spacing and numbering.
Questions for the interdisciplinary team: Should he still be driving? What do we do about it?
Decisions regarding older adults’ driving ability can lead to ethical dilemmas for health care professionals, especially when the patient is upset at the loss of the independence and autonomy that driving provides. Common reasons for recommending driving cessation are epilepsy, stroke, dementia, visual impairment, and medication adverse effects. Strategies to employ when addressing driving decisions with patients are summarized in Table
72-5. Some key ethical considerations with regard to driving include protecting the patient’s safety, protecting the public’s safety, and maintaining patient confidentiality. Protecting Mr. J would include counseling him about conditions or medications that may impair his ability to drive safely. Some states have mandatory reporting requirements for physicians regarding driving ability, and some states have found physicians liable for harm to others because their patients were not warned about conditions or medication side effects that could impair driving performance. Physicians must weigh the responsibility for ensuring patient confidentiality against the responsibility for the patient’s safety and public safety. Involving patients and family, and carefully counseling patients, including providing a written letter of recommendations regarding driving cessation, can help in navigating these challenging situations. Careful documentation of the patient’s driving risks and of the counseling provided, as well as any additional referrals made to social workers, driving rehabilitation specialists, or other professionals, may reduce clinician liability.
In addition, physicians must make sure they are in compliance with their own state reporting laws, available from each state’s Department of Motor Vehicles (DMV). Six states have mandatory reporting requirements (California, Delaware, Nevada, New Jersey, Oregon, and Pennsylvania). In states without mandatory reporting laws, providers need to obtain a release of information to report patients to the DMV, or they could be held liable for
breach of confidentiality. Some states provide civil immunity if health care professionals report in good faith.
TABLE 72-5 ■ STRATEGIES TO EMPLOY WHEN WORKING WITH OLDER ADULTS REGARDING DRIVING CESSATION
FIREARMS
The possession of firearms by people with dementia is another area in which the autonomy and self-determination of the individual need to be weighed against the safety of the individual and of society. Of people with dementia, 60% have firearms in their homes, and only 17% of families report storing these weapons unloaded and locked. Having firearms in the home increases the likelihood of suicide completion. Ten states and the District of Columbia require a permit to buy a firearm. Hawaii restricts persons with organic brain syndromes from possessing firearms, while Texas prohibits persons with dementia from obtaining a license to carry a handgun in public. There
are “red flag laws” called extreme risk protection order (ERPO) laws in 17 states, which provide a means for court-ordered firearm removal based on reasonable or substantial threat. Strategies for assessing and counseling firearm safety in patients with dementia and their families are shown in Table 72-6. As dementia progresses, the recommendations become more focused on counseling families to restrict access to the firearm and remove the firearm from the home environment. A website has been developed to assist families and caregivers in discussing changes regarding safety in dementia, including firearms, driving and home safety (https://safetyindementia.org/).
TABLE 72-6 ■ RECOMMENDED STRATEGIES FOR COUNSELING PERSONS WITH DEMENTIA AND THEIR FAMILIES REGARDING FIREARMS
5 Ls (Loaded, Locked, Little Children, Low, and Learned)
Is it Loaded?
Is it Locked? (gun safe, trigger lock, or cable lock in place; ammunition locked separately)
Are little children present? (grandchildren at risk for injury)
Is the person feeling Low? (risk of suicide)
Is the person Learned? (knows how to use the weapon, or is the weapon inherited or not intentionally purchased; presence of dementia)
Firearm Safety Counseling Protocol: Questions to Assess Access, Safety Profile, and Capacity
Assess dementia severity:
° Include the person with dementia in assessment and counseling when dementia is mild.
° As dementia severity increases, the protocol involves family members only.
Risk assessment:
° Firearm: type, number, status (locked, unloaded, with ammunition)
° Patient: vision, neurological changes, PHQ-2
° Environment: small children, teenagers
° Behaviors: last time handled/disarmed firearm
Counseling based on risk profile:
° Safe storage (locked, unloaded, stored separate from ammunition, safety device)
° Access for suicide and injury-related fatalities (lethal means access)
° Firearm “sunset”: ways to remove the firearm from the home, remove ammunition/disable the firearm (remove firing pins), red flag laws if the family feels unsafe
Data from Pinholt EM, Mitchell JD, et al. “Is there a gun in the home?” Assessing the risks of gun ownership in older adults. J Am Geriatr Soc. 2014;62(6):1142–1146; and Doucette ML, et al. Firearms, dementia, and the clinician: development of a safety counseling protocol. J Am Geriatr Soc. 2020;68(9):2128–2133.
MANAGING SEXUAL INTIMACY AND OTHER ETHICAL ISSUES IN NURSING HOMES
Case
Mrs. N, an ambulatory 87-year-old nursing home resident with moderate dementia, is found by nursing staff in the room of Mr. O, an 85-year-old nursing home resident with COPD, Parkinson disease, and dementia. They are seated on Mr. O’s bed, hugging. Mr. O is wearing his incontinence briefs, but no pants, and is audibly wheezing.
Question: What is the appropriate way to approach sexual intimacy in the long-term care setting?
Common ethical issues encountered in long-term care are summarized in Table 72-7. In general, ethical issues in the nursing home often involve tension between honoring residents’ autonomy and protecting their health and safety. During the COVID-19 pandemic, for example, many nursing homes prioritized beneficence and safety over resident autonomy, which resulted in isolation and cognitive decline for many residents.
TABLE 72-7 ■ ETHICAL ISSUES IN LONG-TERM CARE FACILITIES
Sexual intimacy in the long-term care setting requires a specific approach that is different from usual medical decision-making capacity evaluations.
One recommended approach is summarized in Table 72-8.
TABLE 72-8 ■ COMPONENTS OF SEXUAL CONSENT IN THE NURSING HOME
In the case, the physician caring for Mrs. N and Mr. O must assess their capacity for sexual consent and the long-term care facility must develop a plan for sexual intimacy within the facility. Involving the residents’ surrogate decision-makers is recommended for open communication and a coordinated approach that balances patient preference (if the patient is determined to have capacity for sexual consent) with safety and privacy.
RESOURCE ALLOCATION AND AGEISM
Case
A hospital system is approaching 100% ICU capacity due to the COVID-19 pandemic. This health system’s crisis standards of care specify that adults older than age 85 should not enter the resource allocation algorithm, meaning that this group would be denied intensive care should crisis care standards need to be invoked.
Question: Which ethical principles should guide the fair allocation of health care resources under conditions of resource scarcity, especially when considering age?
The specter of denying health care resources to patients under conditions of resource scarcity is perhaps one of the starkest examples in which health care professionals rely on sound and transparent ethical principles for guidance. In usual conditions in Western societies, the principle of autonomy dominates health care decision-making, but under conditions of resource scarcity, the principle of justice has greater weight. During the 2009 H1N1 pandemic, many states began adopting crisis standards of care documents designed to provide guidance to hospitals faced with conditions of resource scarcity. More recently, the COVID-19 pandemic in 2020 brought renewed attention to these crisis standards of care as ICU capacity in terms of “space” (ie, ICU beds), “staff” (eg, intensivists, nurses, respiratory therapists), and “stuff” (eg, ventilators, remdesivir) was pushed to the brink, and many hospitals developed scarce resource allocation triage teams and revised relevant protocols.
A key ethical consideration with respect to allocating health care resources under conditions of resource scarcity is that clinicians engaged in patient care on the “front line” should not be making decisions regarding resource allocation. Leaving resource allocation decisions to individual clinicians increases the chance that resources will be allocated in an ad hoc rather than a systematic fashion, increases the risk of bias, including the potential for decisions to disadvantage underrepresented groups (eg, under a first-come, first-served approach), and also burdens the clinician with moral distress.
Each hospital should have an interdisciplinary triage committee that is removed from clinical care and that applies their state’s crisis standards of care guidelines, with outcomes of these deliberations communicated to clinicians who apply the triage committee’s recommendation. It is critically important that triage committees apply crisis standards of care fairly and that their decisions are subject to retrospective review. Otherwise, public confidence in the process by which limited resources are allocated may be undermined.
In early 2020, reports surfaced in Italy of rationing limited health care resources based on age. In the United States, some states’ crisis standards of care included age-based cutoffs beyond which patients would not be eligible
to receive limited health care resources. Additional age-related provisions include “tiebreaker” provisions in which older age counts against a patient when two patients have identical scores on measures of physiological function such as the MSOFA (Modified Sequential Organ Failure Assessment) score. Although the MSOFA is not based on age, other assessments used to distinguish between patients during times of crisis, such as the 4C mortality score, weigh age heavily. While older adults with COVID-19 have a higher overall mortality rate than younger adults with COVID-19, the use of age per se to ration health care resources has been criticized because age alone does not account for individual differences that contribute to short-term prognosis. Other arguments for rationing health care resources based on age invoke long-term predicted life expectancy and utilitarian frameworks such as maximizing “life-years saved.” Long-term predicted life expectancy has been criticized in this context because of the difficulty of prognosticating life expectancy and because, among underrepresented patients, it overlooks the fact that life expectancy may be shorter due to social determinants of health outside patients’ control. The “life-years saved” approach does not consider individual differences among patients and instead withholds resources based on membership in a class (ie, one’s age group). Finally, the “fair innings” argument contends that older adults have a lesser claim on health care resources than younger adults because they have lived through more “innings.” Opponents counter that value judgments about which “innings” of life are more important than others cannot be made.
Although situations in which patients are truly tied after an exhaustive consideration of individual characteristics such as physiological parameters should be rare, promising approaches to settling these ties using assessments that are not based on age, such as the Clinical Frailty Scale, are gaining acceptance.
Although the principle of justice outweighs the principle of autonomy under conditions of resource scarcity, autonomy should still be respected. It is critically important to consider the role of advance care planning in ensuring that older adults’ preferences are honored even under these difficult conditions. Clinicians should engage in advance care planning conversations, but should be careful to avoid pressuring patients to make decisions for the sole purpose of conserving resources, regardless of whether the patient is healthy or ill at the time the advance care planning conversation occurs.
Some have argued that older adults should voluntarily relinquish their claim on health care resources to benefit younger generations, but this position is controversial and should not be assumed or imposed on older adults.
FURTHER READING
American Medical Directors Association. White paper: the role of a facility ethics committee in decision-making at the end of life. 2008. Available at https://paltc.org/amda-white-papers-and-resolution-position-statements/role-facility-ethics-committee-decision-making. Accessed February 22, 2021.
American Medical Association Code of Medical Ethics. Chapter 5: Opinions on caring for patients at the end of life. Available at https://www.ama-assn.org/system/files/2019-06/code-of-medical-ethics-chapter-5.pdf. Accessed February 7, 2021.
American Medical Directors Association. Capacity for sexual consent in dementia in long-term care. 2016. Available at https://paltc.org/amda-white-papers-and-resolution-position-statements/capacity-sexual-consent-dementia-long-term-care. Accessed January 10, 2021.
Baile W, Buckman R, Lenzi R, Glober G, Beale EA, Kudelka AP. SPIKES—a six-step protocol for delivering bad news: application to the patient with cancer. Oncologist. 2000;5:302–311.
Betz ME, McCourt AD, Vernick JS, Ranney ML, Maust DT, Wintemute GJ. Firearms and dementia: clinical considerations. Ann Intern Med.
2018;169(1):47–49.
Clinician’s Guide to Assessing and Counseling Older Drivers. 4th ed. American Geriatrics Society; 2019. Available at https://geriatricscareonline.org/ProductAbstract/clinicians-guide-to-assessing-and-counseling-older-drivers-4th-edition/B047. Accessed February 22, 2021.
Epstein EG, Hamric AB. Moral distress, moral residue, and the crescendo effect. J Clin Ethics. 2009;20(4):330–342.
Farrell TW, Widera E, Rosenberg L, et al. AGS position statement: making medical treatment decisions for unbefriended older adults. J Am Geriatr Soc. 2017;65(1): 14–15.
Farrell TW, Francis L, Brown T, et al. Rationing limited healthcare resources in the COVID-19 era and beyond: ethical considerations
regarding older adults. J Am Geriatr Soc. 2020;68(6):1143–1149.
Fox E, Berkowitz KA, Chanko BL, Powell T. Ethics Consultation: Responding to Ethics Questions in Health Care. Second edition. Available at http://vaww.ethics.va.gov/docs/integratedethics/ec_primer.pdf. Accessed February 22, 2021.
Gruenewald DA. Voluntary stopping eating and drinking: a practical approach for long-term care facilities. J Palliat Med. 2018;21(9):1214– 1220.
Metzger E. Ethics and intimate sexual activity in long-term care. AMA J Ethics. 2017;19(7):640–648.
Preshaw DHI, Brazil K, McLaughlin D, Frolic A. Ethical issues experienced by healthcare workers in nursing homes: literature review. Nurs Ethics. 2016;23(5):490–506.
Quill T, Arnold RM. Evaluating requests for hastened death #156. J Palliat Med. 2008;11:1151–1152.
Rushton CH. Cultivating moral resilience. Am J Nurs. 2017;117(2 Suppl 1):S11–S15.
Wong SP, Sharda N, Zietlow KE, Heflin MT. Planning for a safe discharge: more than a capacity evaluation. J Am Geriatr Soc. 2020;68(4):859–866.
van den Berg V, van Thiel G, Zomers M, et al. Euthanasia and physician-assisted suicide in patients with multiple geriatric syndromes. JAMA Intern Med. 2021;181(2):245–250.
_for_Pandemics_2020.pdf. Accessed February 22, 2021.
Wax JW, An AW, Kosier N, Quill TE. Voluntary stopping eating and drinking.
J Am Geriatr Soc. 2018;66(3): 441–445.
Wright JL, Jaggard PM, Holahan T. Stopping eating and drinking by advance directives (SED by AD) in assisted living and nursing homes. J Am Med Dir Assoc. 2019;20(11):1362–1366.
Part V
Organ Systems and Diseases
SECTION A. CARDIOVASCULAR SYSTEM
Chapter 73. The Aging Cardiovascular System
Chapter 74. Coronary Heart Disease and Dyslipidemia
Chapter 75. Valvular Heart Disease
Chapter 76. Heart Failure
Chapter 77. Cardiac Arrhythmias
Chapter 78. Peripheral Vascular Disease
Chapter 79. Hypertension
SECTION B. PULMONARY
Chapter 80. Respiratory System and Selected Pulmonary Disorders
Chapter 81. Chronic Obstructive Pulmonary Disease
SECTION C. NEPHROLOGY
Chapter 82. Aging of the Kidney
Chapter 83. Kidney Diseases
SECTION D. GASTROENTEROLOGY
Chapter 84. Aging of the Gastrointestinal System and Selected Lower GI Disorders
Chapter 85. Upper Gastrointestinal Disorders
Chapter 86. Hepatic, Pancreatic, and Biliary Diseases
Chapter 87. Constipation
SECTION E. ONCOLOGY
Chapter 88. Cancer and Aging: General Principles
Chapter 89. Breast Disease
Chapter 90. Prostate Cancer
Chapter 91. Lung Cancer
Chapter 92. Gastrointestinal Malignancies
Chapter 93. Skin Cancer
SECTION F. HEMATOLOGY
Chapter 94. Aging of the Hematopoietic System and Anemia
Chapter 95. Hematologic Malignancies (Leukemia/Lymphoma) and Plasma Cell Disorders
Chapter 96. Coagulation Disorders
SECTION G. ENDOCRINOLOGY AND METABOLISM
Chapter 97. Aging of the Endocrine System and Non-Thyroid Endocrine Disorders
Chapter 98. Thyroid Diseases
Chapter 99. Diabetes Mellitus
SECTION H. RHEUMATOLOGY
Chapter 100. Myopathies, Polymyalgia Rheumatica, and Giant Cell Arteritis
Chapter 101. Rheumatoid Arthritis and Other Autoimmune Diseases
Chapter 102. Back Pain and Spinal Stenosis
Chapter 103. Fibromyalgia and Myofascial Pain Syndromes
SECTION I. INFECTIOUS DISEASES
Chapter 104. Infection and Appropriate Antimicrobial Selection
Chapter 105. Bacterial Pneumonia and Tuberculosis
Chapter 106. Urinary Tract Infections
Chapter 107. Other Viruses: Human Immunodeficiency Virus Infection and Herpes Zoster
Chapter 108. Influenza, COVID-19, and Other Respiratory Viruses
73
The Aging Cardiovascular System
Ambarish Pandey, George E. Taffet, Dalane W. Kitzman, Bharathi Upadhya
PRINCIPLES OF AGING BIOLOGY PERTINENT TO THE CARDIOVASCULAR SYSTEM
As the aging process begins after maturation, deteriorative, regenerative, and compensatory changes develop over time and result in diminished physiologic reserve capacity, an increased vulnerability to challenges, and, as a result, a decreased ability to fully recover from and survive challenges (ie, reduced resilience). Importantly, aging itself does not result in disease; however, it does lower the threshold for the development of disease and can intensify and accelerate the effects of disease once initiated. This increased vulnerability with age to external or internal challenges is one of the tenets of geriatrics and gerontology.
These concepts are particularly relevant to the aging of the human cardiovascular system, especially older persons living in developed countries. When studying normal aging in these populations, it is essential to consider screening for clinical and subclinical disease, particularly atherosclerosis, and consider the impact of cultural and environmental factors and social determinants of health that are distinct from aging yet can mimic aging effects. These can manifest in human population studies as cohort and period effects, subtle or overt, and easily confused with aging.
For example, numerous observational studies indicate that blood pressure
increases with aging. However, recent studies comparing age–blood pressure associations over a lifetime (to age 60) in westernized versus non-westernized Amerindian communities, the Yanomami and the Yekwana, from remote rainforests in Venezuela, suggest that the strong association between age and blood pressure may instead be due to diet and lifestyle. There is an age-associated increase in blood pressure among individuals from the Yekwana community, who have been exposed to a western lifestyle, but not in the Yanomami community, who are largely hunter-gatherer-gardeners and have remained isolated from western lifestyle influences. It has been proposed that a true age-related change should be absent in young persons, increase with age, be universally present in very old persons, and not be related to any known, definable disease.
Learning Objectives
1. Understand the effects of normal aging on cardiac and vascular structure and function.
2. Describe the effects of normal aging on the anatomy and physiology of the heart and vasculature.
3. Understand the possible implications of the age-related changes in resting cardiovascular function.
4. Understand the role of age-related changes in lowering the threshold for clinical disease.
5. Describe the effect of age on the cardiovascular response during exercise.
Key Clinical Points
1. Normal aging is accompanied by substantial alterations in the anatomy and physiology of the heart and vasculature.
2. There are declines in most aspects of cardiovascular function, creating a significantly reduced reserve capacity that becomes more apparent during exercise and stress.
3. Many of the age-related changes may lower the threshold for clinical disease and predispose to various cardiovascular disorders in older people.
4. Awareness of the principles of aging biology, in general, will help clinicians tailor intelligent treatments to older patients.
5. Age-related declines in cardiovascular and exercise performance have been shown to be partially preventable and reversible with exercise training. Thus, maintaining regularly scheduled physical activity is an important strategy to mitigate the adverse effects of aging on cardiovascular function.
In some early human aging studies, individuals with clinical and subclinical diseases were not excluded, leading to an overestimation of the effects of aging on the cardiovascular system. Coronary atherosclerosis is highly prevalent in western societies and is an important disorder that can be occult and can significantly affect cardiac function. Systemic arterial hypertension is even more common. Therefore, reasonable screening for these two most common disorders is prudent to separate aging from disease.
Beyond the effects of subclinical disease, there are additional effects of physical inactivity. Humans and many animals become increasingly sedentary as they age. For example, rats given free access to a running wheel will run 20 km/week when they are young, but this decreases to less than 7 km/week as they approach 23 months of age. Many older people are even less active; Americans older than 70 years engage, on average, in less than 10 minutes per day of physical activity. Another increasingly important lifestyle-related factor, relatively new to civilization, is obesity.
Adipose tissue accumulated through excess caloric intake has numerous adverse effects involving nearly all physiologic systems, including the cardiovascular system, and obesity increases substantially with age. Thus, the changes seen in an older population reflect the combination of all these factors: period, cohort, lifestyle, disease-related changes, and the biological effect of age itself. It is often challenging to precisely separate and discern, both qualitatively and quantitatively, the latter from the former. However, awareness of the important nuances of normal aging can help avoid most errors.
AGING CHANGES IN THE HEART
Substantial changes occur with aging in myocardial composition, cardiac structure, and cardiovascular function at rest and during exercise. The changes in anatomy are summarized in Table 73-1.
TABLE 73-1 ■ AGE-RELATED CHANGES IN THE ANATOMY OF THE HEART
Cellular Changes of the Aging Heart
Myocyte hypertrophy and degeneration Cardiomyocyte hypertrophy has been recognized as part of the response to the arterial changes and increased afterload described below. However, this should be interpreted in light of the evidence that the heart is renewing itself, continuously repopulated from resident stem cell populations and/or those from the bone marrow. Age- associated cardiomyocyte hypertrophy may mark depletion of the process, as the youngest cells, those most recently differentiated into cardiomyocytes, are thought to be the smallest, and in mouse hearts, myocyte size heterogeneity increases dramatically with age. Interestingly, the largest cells are also the most vulnerable to stress.
The loss of myocytes with age is greater than the ability to repopulate the heart. This loss is due to aging-induced oxidative stress and mitochondrial damage that trigger cardiomyocyte death, including necrosis, apoptosis, and autophagy. The exact mechanisms of oxidative stress-induced aging are still not precisely known. Increased reactive oxygen species lead to cellular senescence, which may stop cellular proliferation in response to damage.
The total number of cardiomyocytes may be reduced by 50% in healthy human and animal hearts across the lifespan. The remaining cardiac myocytes are larger and much more variable in size. Nearly universal findings in hearts from older individuals are focal basophilic degeneration, resulting from abnormal glycogenolysis, and lipofuscin, a “wear-and-tear” pigment that gives the aged myocardium a macroscopically darkened appearance. Lipofuscin occupies up to 10% of myocyte volume in very old hearts. Each mitochondrion has its own genome, with a relatively sparse ability to correct mutations. Several investigators find that mitochondrial DNA deletions may increase with age. The implications of this finding remain uncertain, since there are approximately 1000 mitochondria per myocyte and there is evidence of active mitochondrial quality control mechanisms, which may also be altered with age.
Nowhere is cellular dropout more impressive than in the sinoatrial node, whose volume decreases with age. The number of pacemaker cells is reduced (by 90% by age 70), with most of the volume replaced by fat. More modest cellular losses occur at the atrioventricular (AV) node, and minimal changes occur in the distal conduction system. The dropout of sinoatrial nodal cells is accompanied by a decrease in the slow, L-type calcium channel critical to the initiation of depolarization. Although the density of the L-type Ca2+ channels does not seem to be affected by age, their function appears to decline: a reduction in Ca2+ transient amplitude and slower channel inactivation have been associated with aging. The sensitivity of the older sinoatrial node to calcium channel blockers appears to increase, as assessed in the older guinea pig pacemaker.
Alterations in myocyte calcium homeostasis and active relaxation Older cardiomyocytes are intrinsically stiffer. In isolated papillary muscles from older rat hearts, a change in the pattern of contraction and relaxation is seen: slower force generation and slower relaxation with no change in peak force. The inotropic and lusitropic (facilitating relaxation) responses to sympathetic stimulation are also decreased with age. Calcium fluxes dictate cardiac contraction and relaxation. For contraction, a small amount of calcium enters the cells via the slow L-type calcium channels stimulating the release of 10- to 20-fold more calcium from the sarcoplasmic reticulum (SR), permitting actin and myosin to generate force. Active relaxation includes the calcium reuptake by the cardiac SR after contraction and extrusion from the cell by the Na-Ca exchanger and the SR Ca-ATPase (SERCA) pump. SERCA hydrolyzes ATP
to translocate Ca2+ from the cytosol back into the SR, allowing relaxation of the cardiac muscle. In the young heart, 90% of calcium cycles in and out of the SR. Aging reduces the capacity of the SR to accumulate and retain Ca2+ and inhibits excitation-contraction coupling in the cardiomyocytes by interfering with the calcium transient. Calcium reuptake into the SR is decreased by almost 50% in old hearts from rats and mice, and the content of SERCA is decreased in old human hearts as well. Concurrently, the old SR has an enhanced calcium leak, manifested by small spontaneous localized releases called calcium sparks. All of these changes impede cardiac relaxation, perhaps increase diastolic calcium concentrations, and result in smaller Ca2+ stores in the SR for release in the next contraction. To a small extent, compensation occurs in other calcium fluxes in that the SR Ca-ATPase activity is increased in old rat hearts. Gene therapy increasing the SR Ca-ATPase has improved the function of old rat hearts.
Connective tissue fibrosis and scarring Age-related cardiac fibrosis reflects the net result of multiple pathways modulated by natriuretic peptides, neurohormonal drive, endothelin (ET) effects, reactive oxidation species, inflammation, advanced glycosylation end products, hemodynamics, and other influences, many of which will be subject to polymorphic genetic variation. Diffuse foci of fibrosis are seen microscopically in the myocardium owing to an increase in interstitial collagen, a delicate pattern, unlike the patches of fibrosis seen after acute injuries, such as after myocardial infarction. Age-related fibrosis does not appear to require either ischemia or hypertension, although both disorders accelerate the process.
Quantitatively, collagen content approximately doubles in the old heart as measured by magnetic resonance imaging (MRI). The collagenous weave is thicker and more cross-linked, conferring greater rigidity to the myocardium. Aging may produce a shift in the balance between matrix metalloproteinases (MMPs) and tissue inhibitors of MMPs that ultimately translates into increased matrix accumulation. Increased age-related fibrosis has been found in the cardiac conduction system (the SA node, the AV node, the His bundle, and the left bundle branch) and left ventricular (LV) tissue. These changes may partly underlie age-related alterations in diastolic filling. In addition, the proliferation of cardiac fibroblasts and collagen deposition in the atria with age will affect the myocardium’s electrophysiological properties. Atrial fibrosis might lower the threshold for the development of atrial arrhythmias.
The association between healthy aging and myocardial fibrosis has been a matter of debate. In a small study, 32 healthy volunteers underwent cardiac MRI-based assessment of myocardial extracellular volume (ECV); older age was associated with greater myocardial fibrosis. These observations are
consistent with animal studies by MRI and histological assessments. However, these findings have not been seen uniformly. In a subset of 314 healthy individuals from the Multiethnic Study of Atherosclerosis (MESA) cohort, the degree of myocardial fibrosis, as assessed by ECV, and myocardial scar burden, was not associated with aging.
Senile amyloid deposition Another histopathological change found in cardiac tissue of older adults is amyloid deposition. Senile cardiac amyloid deposition is seen to varying degrees in the majority of hearts from persons older than 90 years with a prevalence greater than 90% but is uncommon before age 60. It is easily recognized at autopsy, particularly along the left atrial (LA) endocardium. Its physiologic and clinical significance are incompletely understood, but it might contribute to LV diastolic stiffness. In some cases, amyloid deposition occurs at a level that leads to the progressive development of heart failure (HF). This infiltrative cardiomyopathy is defined as systemic senile amyloidosis (SSA). SSA is far less common than atrium-restricted amyloidosis.
Epicardial adiposity and intramyocardial fat deposition Aging affects all organ systems and alters body composition. Typically, fat mass increases with age and peaks at age 60 to 75 years. With aging, adipose tissue deposits, particularly in the right ventricular (RV) epicardium and the AV groove; this is most pronounced in women and in obese individuals. These observations at autopsy correlate with the increase in epicardial and pericardial fat stripes that superficially mimic pericardial effusion on echocardiography. In the Framingham Heart Study (FHS), the incidence and size of clear echocardiographic spaces (fat stripes) in the pericardium increased with age in both posterior and anterior regions. The increase in adipocytes may reflect a loss of control of differentiation of resident stem cells. Emerging data suggest that this adipose tissue may impair cardiac function: the cells are metabolically and hormonally active and can generate various factors, including cytokines. Increased pericardial fat has been associated with atherosclerosis and coronary calcification, risk of atrial fibrillation (AF), and HF, particularly HF with preserved ejection fraction (HFpEF).
Besides epicardial fat deposition, myocardial triglyceride content increases with aging, and this increase is further accentuated by comorbidities such as diabetes and obesity. The aging-associated increase in myocardial triglyceride content may be related to reduced fatty acid oxidation in the aging heart. In older individuals, greater myocardial triglyceride content is associated with increased fatty acid intermediates in the myocytes, which alter myocardial structure and function and lead to myocardial lipotoxicity and increased cardiomyocyte apoptosis. Clinically, this may manifest as impaired myocardial relaxation, reduced cardiac exercise reserve, and increased risk of HFpEF.
Neurohormonal signaling The two main pathways are the renin-angiotensin- aldosterone system (RAAS) and β-adrenergic signaling. RAAS plays an important role in regulating blood volume and systemic resistance. Several studies have revealed similarities between angiotensin II–treated heart and the aging heart, suggesting that angiotensin II may play a role in cardiac aging. These similarities consisted of the development of cardiac hypertrophy, fibrosis, and diastolic dysfunction. Neurohormonal signaling also involves β-adrenergic receptors. These receptors regulate heart rate, myocardial contractility, and ventricular structural remodeling after stimulation by catecholamines. With aging, circulating catecholamine levels increase, leading to uncoupling of β-adrenergic receptors from their effector, adenylyl cyclase. This explains the reduced β-adrenergic responsivity observed with age.
Changes in Cardiac Structure
Left ventricular mass Seminal autopsy studies from subjects aged 20 to 99 without a history of hypertension or coronary atherosclerosis demonstrated that mean heart weight indexed to body surface area was not associated with age in men but increased with age in women. The interaction between age and gender has also been confirmed in other autopsy studies using 2D-guided M-mode echocardiographic measurements of LV mass and in the Cardiovascular Health Study (CHS), an NHLBI-funded population-based, observational cohort study of 5000 older adults. Recent studies evaluating cross-sectional and longitudinal associations between aging in healthy individuals (without cardiovascular disease [CVD] including hypertension, diabetes, smoking) and LV mass using echocardiographic and cardiac MRI examinations have demonstrated no significant changes in LV mass with aging in men and women, particularly after accounting for body size. Taken together, there are modest effects of age on LV mass, with no change to a slight reduction in LV mass noted with aging, particularly in middle-aged or older individuals.
Left ventricular and atrial size Changes in LV cavity size with aging have been a matter of debate, with some cross-sectional studies demonstrating a decrease in LV internal diameter in systole and diastole with aging, while others showed no change or an increase. Cross-sectional analyses from the Baltimore Longitudinal Study of Aging (BLSA) using MUGA scan-based LV size assessment suggested increasing LV end-diastolic volume with aging in men but not women. However, these observations have not been confirmed in other studies, and a decline in LV end-diastolic volume with aging has been demonstrated in a cohort of 104 healthy volunteers who were rigorously screened to exclude prevalent CVD. Larger cohort studies using cardiac MRI or echo-based assessment of LV parameters demonstrated a decline in LV end-diastolic volume with aging in cross-sectional as well as longitudinal analyses with repeated follow-up assessments.
Most echocardiographic and autopsy studies have found a significant age-related increase in LA size in subjects without apparent CVD, with an increase in LA dimension between ages 30 and 70. However, frankly increased LA volume before age 70 may reflect disease, whereas after age 70, increased LA volume can occur from aging alone. The mechanisms of
this age-related increase in LA volume are unknown but may be related to the age-related alterations in diastolic LV function. Serial echocardiographic measurements of LA size in humans have indicated that age and disease have additive effects on increases in LA size over time. Some have suggested an assessment of LA size to evaluate the presence of HFpEF. However, this is likely confounded by the effects of aging alone. While LA size appears to reflect chronic elevations in LV end-diastolic pressures, it does not discriminate whether this is due to systolic or diastolic dysfunction or restriction from pericardial or infiltrative processes. Therefore, LA volume may not be helpful in discriminating between the types of cardiac dysfunction that cause the elevations in pressure or volume. However, age-related LA dilation likely has consequences for specific disorders common in older adults, such as AF. Further, in population-based cohorts, LA size is significantly associated with the age-adjusted risk for stroke and death in both sexes.
Left ventricular wall thickness and geometry In the large autopsy study described earlier, RV and LV free wall thicknesses remained relatively constant with age, while ventricular septal thickness increased with age for both men and women, as shown in Figure 73-1. Wall thickness measurements at autopsy
may not correlate well with those made in living individuals when measurements can be made in systole and diastole. However, most echocardiographic studies of healthy subjects confirmed autopsy-based findings showing mild age-related increases in ventricular septal and LV free wall thickness in women and men.
FIGURE 73-1. Ventricular wall thickness. Index mean ventricular wall thickness versus age in normal hearts from 765 adults. (Modified with permission from Kitzman DW, Edwards WD. Age-related changes in the anatomy of the normal human heart. J Gerontol. 1990;45[2]:M33– M39.)
A frequent finding at autopsy and on echocardiograms of persons without apparent heart disease is the mild disproportionate thickening of the basal ventricular septum. This has been called sigmoid ventricular septum and senile septum and can confound to some degree the diagnosis of hypertrophic cardiomyopathy in older patients. The septal thickening may reflect hypertension rather than biological aging.
Due to increasing LV mass and free wall thickness and shrinking LV size, prior studies among healthy individuals demonstrated increasing relative wall thickness and concentricity (mass/volume ratio) with aging. In more recent studies with MRI-based assessments, changes in relative wall thickness with aging have been less uniformly described. In a cross-sectional analysis of healthy individuals from MESA, the mass/volume ratio increased with aging in women but not men. In healthy individuals from the FHS, MRI-derived relative wall thickness did not increase with age in cross-sectional analysis. Among healthy Coronary Artery Risk Development in Young Adults (CARDIA) study participants, longitudinal echocardiographic assessment in young adulthood and middle age (20 years apart) did not show significant increases in relative wall thickness. These data suggest that age-related changes in relative wall thickness are modest and may occur after middle age.
Valves The cardiac valves undergo several age-related changes. When measured at autopsy, the thicknesses of normal aortic and mitral leaflets increase, particularly along the closure margins. This is associated microscopically with collagen deposition and degeneration, lipid accumulation, and focal dystrophic calcification in the leaflets and annuli.
In those subjects most affected, this is recognized clinically and echocardiographically as aortic valve sclerosis, valve thickening without significant hemodynamic dysfunction. In the CHS, aortic sclerosis was found in 26% of participants, associated with male gender and hypertension. The relationship between age-related degenerative changes and the development of clinical aortic stenosis is incompletely defined, but aortic sclerosis is independently associated with a 1.5-fold increased risk of cardiovascular mortality, calling into serious question whether this should be considered a normal age-related change.
Age-related degenerative calcification of an otherwise normal-appearing tricuspid aortic valve may result in progressive aortic stenosis, the most common cause of aortic stenosis requiring valve replacement. The relationship between near-universal age-related thickening and mild calcification of the aortic valve leaflets and the development of degenerative calcific aortic stenosis is unclear, but the lack of efficacy of statins in modifying this natural history suggests that typical atherosclerosis is unlikely to be the driver.
The mitral annulus develops microscopic calcium deposits with aging, but gross mitral annular calcification is likely a disease process. Relatively little is known about the pathophysiology or natural history of mitral annular calcification. It is present in up to 40% of hearts from women older than 90 years with a large (4 to 1) female predominance. It is often associated with AV block and bundle branch block and modest mitral regurgitation but rarely with significant mitral stenosis.
The circumferences of all four cardiac valves, measured at autopsy, increase with age in normal hearts from women (Figure 73-2) and men and
are associated with collagen degeneration and lipid accumulation in the valve annuli. This is more notable for the semilunar (aortic and pulmonary) valves than for the AV (mitral and tricuspid) valves. In the case of the aortic annulus, this normal age-related dilatation has been confirmed in living subjects with echocardiography.
FIGURE 73-2. Normal indexed mean cardiac valve circumferences versus age. Results in 392 women. (Modified with permission from Kitzman DW, Edwards WD. Age-related changes in the anatomy of the normal human heart. J Gerontol. 1990;45[2]:M33–M39.)
Annular dilatation likely contributes to the age-related increase in valvular regurgitation documented in healthy, normal, asymptomatic subjects. By the age of 80, 90% of apparently healthy subjects had multivalvular regurgitation; the aortic valve was affected earliest and to the greatest extent. The degree of valvular regurgitation caused by normal aging is always trivial or mild, central, and associated with normal (for age)-appearing leaflets.
Age is the strongest risk factor for isolated severe aortic regurgitation. The idiopathic dilatation of the aortic annulus is the most common cause of aortic regurgitation in patients undergoing aortic valve surgery. This disease may exaggerate the age-related degenerative change with additional contributing factors as yet unidentified.
Pericardium The normal pericardium is composed of wavy bands of collagen bundles. The straightening of these wavy bands allows a degree of distensibility when pericardial pressure or volume increases acutely. With aging, these collagen bands become straighter and the pericardium becomes thicker and stiffer in older subjects. The significance of this is unknown, but it could impact diastolic compliance in older adults. As discussed earlier, the degree of epicardial and pericardial fat increases with age, particularly in women and obese persons.
Atrial septum The atrial septum thickens and becomes stiffer with age, probably owing to fatty infiltration and fibrosis. The atrial septum also becomes less mobile with phasic respiration. If a thin, hypermobile atrial septum is seen on echocardiography in an older person, an atrial septal aneurysm (which often is accompanied by fenestrations), patent foramen ovale, or atrial septal defect should be suspected and should prompt further evaluation with color Doppler and peripheral venous injection of agitated saline contrast.
An exaggerated form of the age-related fatty infiltration of the atrial septum is found almost exclusively in older adults and is called lipomatous hypertrophy. It can mimic an intracardiac tumor but is recognizable by its characteristic dumbbell shape.
A patent foramen ovale is seen in approximately 35% of normal hearts younger than 30 years and in 20% at age 80. The lower prevalence of patent foramen ovale in older individuals is accompanied by an increased size of those that persist. While paradoxical embolism is usually considered when an atypical stroke occurs in a person younger than 55 years, it can also contribute to strokes among older adults. Because of this, peripheral venous injection of agitated saline contrast is often used as an adjunct to echocardiographic imaging even in older patients referred with atypical stroke.
Coronary arteries With aging, the coronary arteries become more dilated and tortuous, possibly because of hemodynamic drag. Coronary collaterals may increase in number and size with age, but this may reflect atherosclerosis. While atherosclerosis is a disease process, Mönckeberg medial calcification (arteriosclerosis) probably represents an age-related degenerative process. It is nearly universally found in the very old independent of gender. In the peripheral vasculature, it contributes to the age-related elevation in systolic blood pressure and arterial stiffening. Often seen in older patients and those with end-stage renal failure is the triad of cardiac calcifications (aortic cusps, mitral annulus, and coronary arteries), called the senile calcification syndrome. In these older persons, calcium metabolism is unaltered and,
although a relationship with elevated serum cholesterol levels has been described, the etiology is unknown. These age-related changes could contribute to the loss of specificity of coronary calcium score in persons older than 75 years.
Overall Appearance
A characteristic geometric configuration is imparted to the older heart by these age-related changes, particularly those observed in the cardiac chambers: shortening of the long-axis dimension, a mild decrease in the internal systolic and diastolic LV cavity dimensions, dilatation and rightward shifting of the aortic root, and dilatation of the left atrium, as shown in Figure 73-3. These changes, plus mild regional calcification in the aortic and mitral valve annuli, are so characteristic that they serve as clues to help detect the age group of patients during blinded echocardiogram readings.
FIGURE 73-3. Age-related changes in the cardiac chambers. A. Normal heart from an 18- year-old for comparison (left ventricular long-axis views). B. Normal heart from an 84-year-old man demonstrates shortening of the base-to-apex (long-axis) dimension, decreased internal left ventricular dimension, aortic root dilatation with rightward shift, sigmoid-shaped septum, and left atrial dilatation. (Reproduced with permission from Bradenburg R, Fuster V, Giuliani ER, et al. Cardiology Fundamentals and Practice. Chicago, IL: Year Book Medical Publishers; 1987.)
Changes in Cardiac Function with Age at Rest
Since there are significant changes in the anatomy of the cardiovascular system, one would expect alterations in cardiac physiology as well. Several important age-related changes have already been discussed briefly earlier, including changes in valvular function and the potential anatomic substrates for impaired diastolic function. While the effect of age on cardiac function has long been a research topic, only recently have studies been performed using adequately robust techniques combined with appropriately screened reference populations. However, it is still true that little information is available regarding how these changes in function impact the epidemiology, presentation, diagnosis, prognosis, and therapy of CVD. Changes with age in cardiovascular function are summarized in Table 73-2.
TABLE 73-2 ■ AGE-RELATED CHANGES IN CARDIOVASCULAR PHYSIOLOGY
Most earlier studies of cardiovascular function at rest show no substantial change in cardiac output, stroke volume, heart rate, and ejection fraction with aging, but mild-to-moderate, significant increases in systemic and pulmonary arterial blood pressure, with resultant increases in left and right ventricular afterload. However, in a cohort of healthy, well-screened individuals who underwent detailed resting and exercise hemodynamic and radionuclide assessment of LV volumes, there was a significant cross-sectional association between older age and smaller LV size, higher LV filling pressure, and lower stroke volume at rest, suggesting alterations in Frank-Starling mechanisms.
Heart rate and rhythm There is no change in resting heart rate with healthy adult aging (Figure 73-4). However, as will be discussed in the section on exercise, there is a clear and marked decrease in maximum heart rate in response to exercise that is highly predictable and can easily be estimated by a simple equation. For healthy adults, the equation (208 – [0.7 × age]) predicts the maximum heart rate for exercise testing.
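The arithmetic of this prediction equation can be sketched in a few lines; the function name and the example ages are illustrative, but the equation itself (208 − 0.7 × age) is the one quoted above.

```python
def predicted_max_hr(age_years: float) -> float:
    """Predicted maximal exercise heart rate (beats/min) for healthy
    adults, using the equation quoted in the text: 208 - 0.7 x age."""
    return 208.0 - 0.7 * age_years

# The predicted maximum falls by 7 beats/min per decade of age:
for age in (20, 40, 60, 80):
    print(age, predicted_max_hr(age))
# 20 -> 194.0, 40 -> 180.0, 60 -> 166.0, 80 -> 152.0
```

Note that the decline is linear in age, so the predicted maximum for an 80-year-old (152 beats/min) is more than 40 beats/min below that of a 20-year-old.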
FIGURE 73-4. Correlation between age and resting measures of (A) diastolic function assessed invasively using pulmonary capillary wedge pressure, (B) heart rate, (C) systolic function assessed as ejection fraction, and (D) left ventricular end-diastolic volume among healthy, community-dwelling volunteers without cardiovascular disease. (Reproduced with permission from Pandey A, Kraus WE, Brubaker PH, et al. Healthy aging and cardiovascular function: invasive hemodynamics during rest and exercise in 104 healthy volunteers. JACC Heart Fail. 2020;8[2]:111–121.)
The age-related change in maximum heart rate is perhaps the most substantial change in cardiac function, both in magnitude and in consequence. Although its mechanism(s) are not fully understood, several studies have been performed. In the presence of the β-adrenergic antagonist propranolol and the parasympathetic antagonist atropine, which together ablate sympathetic and parasympathetic input to the heart, the intrinsic heart rate can be observed. The intrinsic heart rate decreases by 5 to 6 beats/min with each decade of age, so that the resting heart rate in an 80-year-old is not much slower than the intrinsic heart rate. At rest, the parasympathetic nervous system is minimally slowing the heart. As would be expected, the increase in heart rate after atropine is less in older adults than in the young.
There is also decreased response to sympathetic agonists. Administration of sympathomimetic agents to healthy young and old adults demonstrated chronotropic effects were attenuated in the old. At doses that increased heart rate by 25 beats/min in young males, heart rate increased only 10 beats/min or less in older adults.
Supporting the decline in maximal heart rate as a primary age-related biological change is the observation that it is not modified by vigorous exercise training; it is therefore not a consequence of reduced physical activity. Also, it does not appear to reflect inadequate sympathetic stimulation, as plasma norepinephrine levels are increased, not decreased, at rest in normal older adults. Further, norepinephrine increases even more with exertion in older than in young persons under similar stress.
Perhaps as a direct reflection of the decreased parasympathetic nervous system input and decreased responsiveness to autonomic input, there is a significant decrease in heart rate variability. Heart rate variability measures the variations in instantaneous heart rate (or RR interval) over time. Loss of variability results in decreased complexity, which correlates with the decrease in physiologic reserve. Any loss of complexity may then render the older individual less likely to tolerate challenges to their homeostasis.
Furthermore, the loss of complexity with age occurs in a number of physiologic systems and is forestalled by interventions like exercise training.
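As a concrete illustration of one common time-domain measure, heart rate variability can be summarized as the standard deviation of beat-to-beat (RR/NN) intervals, often called SDNN. The interval values below are invented for illustration, and this simplified sketch omits the artifact and ectopic-beat filtering a clinical analysis would require.

```python
import statistics

def sdnn_ms(rr_intervals_ms):
    """SDNN: standard deviation of beat-to-beat (RR/NN) intervals, in ms.
    Lower values indicate reduced heart rate variability, as described
    with aging in the text. (Population SD used here for simplicity.)"""
    return statistics.pstdev(rr_intervals_ms)

def mean_heart_rate_bpm(rr_intervals_ms):
    """Mean heart rate implied by a series of RR intervals in ms."""
    return 60_000 / statistics.mean(rr_intervals_ms)

# Illustrative 5-beat series: mean RR 800 ms corresponds to 75 beats/min.
rr = [800, 810, 790, 805, 795]
print(round(sdnn_ms(rr), 2))     # ~7.07 ms
print(mean_heart_rate_bpm(rr))   # 75.0
```

Clinical SDNN is usually computed over much longer (often 24-hour) recordings; this sketch shows only the underlying arithmetic of "variation in instantaneous RR interval over time."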
In older adults highly screened to exclude the potential confounding effects of disease, the prevalence of atrial premature beats (APBs) reaches 88% on 24-hour ambulatory monitoring. Because the presence of APBs is not associated with cardiac risk over the following decade, they are not thought to reflect subclinical coronary artery disease. At exercise testing, isolated ventricular ectopic beats occurred in more than half of highly screened adults older than 80 years. Therefore, the increases in ectopy of both atrial and ventricular origin are considered normal aging processes.
Diastolic function Increased LV stiffness associated with aging was first described using invasive techniques in young and old beagles. Ten years later, similar findings were identified by invasive techniques in humans. The advent of spectral Doppler echocardiography between these two developments greatly expanded the ability to assess LV diastolic filling noninvasively. All studies, including the large population-based databases from the FHS and the CHS, have uniformly found that diastolic LV filling is substantially altered in older normal adults. In addition, similar changes with aging are found in monkeys, rats, dogs, and mice.
The age-related changes in diastolic LV filling patterns—measured by reduction in early diastolic LA emptying, increased late diastolic emptying from atrial contraction, and increase in isovolumic relaxation time on pulsed Doppler echocardiography—have been confirmed in noninvasive human studies. The echocardiographic indexes of diastolic filling may be altered early in the course of various disorders that are common and sometimes unrecognized in older adults. A number of physiologic variables significantly influence them. Thus, it had been questioned whether the age-related alterations in Doppler diastolic filling indexes were simply secondary to these or whether they occurred independently of CVD and other confounding physiologic variables. However, physiologic studies with invasive and noninvasive hemodynamic assessments in old and young healthy volunteers rigorously screened for CVD have confirmed that an alteration in diastolic LV filling pattern is a primary, biologic effect of aging, intrinsic to the aged human heart, and not explicable by other physiologic and pathologic changes that frequently accompany the aging process.
Since normal healthy older individuals are expected to have an altered Doppler LV filling pattern, what should be considered abnormal? Data from echocardiographic assessments in older healthy adults without CVD included in large community-based cohort studies such as the CHS and FHS have yielded important data informing the normative range for diastolic function parameters in older men and women. Thus, Doppler diastolic filling patterns within this range should be considered normal in patients in this age range.
Accordingly, it is preferable to use the term “delayed relaxation” or “normal for age” in clinical descriptions of this finding, rather than the terms “impaired relaxation” or “abnormal relaxation,” which denote abnormality and are inconsistent with aging principles. In addition, findings obtained in older patients during basal conditions that fall outside these ranges should be considered abnormal, regardless of age. Second, the pattern of LV filling is helpful. Certain patterns, such as the pseudonormalized and restrictive patterns can be more easily discerned from normal and can be more specific for disease when found in older than in younger patients because these differ
more from the expected pattern. Mitral annulus tissue Doppler has substantially improved the ability to assess LV diastolic function noninvasively because the annular velocity measures are relatively load-independent. As would be expected based on the above studies, age alters the tissue Doppler velocities as well. Unfortunately, age-related normative reference data are relatively sparse.
The age-related differences in LV diastolic function and LV compliance have also been assessed by invasive hemodynamic studies. Among healthy, community-dwelling individuals age 20 to 76 well-screened for CVD, a cross-sectional assessment demonstrated higher pulmonary capillary wedge pressure and smaller LV size with increasing age suggesting worse diastolic function and alteration in LV relaxation (Figure 73-4). Some studies have attributed the age-related difference in LV diastolic function to greater intrinsic LV stiffness with a slowing in LV relaxation in early middle age and a significant reduction in diastolic LV function after the age of 65. Other studies have questioned the contribution of impairment in LV relaxation toward the age-related decline in LV diastolic function and implicated changes in the LA properties.
Age-related alterations in LV diastolic function are also evidenced by an atrial gallop (S4) on physical examination in those older than 75 years. An atrial gallop is a manifestation of the increased contribution of LA systole to ventricular filling. The decrease in rapid cardiac relaxation during early diastole results in increased dependence on LA systole in late diastole for adequate LV filling. However, as diastolic function worsens further, LA pressures increase, and early filling subsequently increases again. This pseudonormalization of LV filling is another marker that the aging process has tipped over into HF.
The age-related changes in diastolic function can also be modified and improved by exercise behavior and cardiorespiratory fitness levels. In old rats trained on treadmill exercise for 1 to 2 months, SR calcium uptake and cardiac relaxation improved to that seen in young sedentary rats. In mechanistic hemodynamic studies among humans, older individuals with greater lifetime exercise exposure have been shown to have more favorable LV diastolic function than sedentary individuals. Furthermore, recent exercise training studies in human participants have also demonstrated significant improvement in invasive measures of LV diastolic stiffness with intense, long-duration (up to 2 years) exercise training in middle-aged but not
older adults. Humans on caloric restriction diets have better diastolic function than age-matched controls, corroborating experiments in experimental animals. While this approach may not be highly practical, only 5 years of caloric restriction is needed to produce the change. Furthermore, intentional weight loss interventions have also demonstrated significant improvements (~20% reduction) in invasively assessed left-sided filling pressures. Taken together, these findings suggest that the age-related diastolic impairment may be modifiable, particularly early in the course of the aging process, with lifestyle interventions.
Systolic function In healthy humans, no age-related changes in measurable, overall LV contractility, assessed at rest by the ejection fraction, fractional shortening, or mean velocity of circumferential fiber shortening, have been reported (Figure 73-4). Wall motion abnormalities should not be considered normal, even in very old adults. In the CHS, the prevalence of unexpected wall motion abnormalities, in the absence of history and symptoms of coronary heart disease, was 0.4% in women and 0.5% in men.
The contraction and relaxation of the older LV are not uniform. In older people, segments of the heart have started to relax while others are still contracting. As LV pressure must be low before filling can start, this prolonged contraction shortens the time available for filling to occur.
Aging alters several Doppler measures of aortic outflow. Aortic peak flow velocity, time-velocity integral, and acceleration are reduced with advancing age. While these hemodynamic factors relate to LV systolic performance, they are also substantially affected by afterload, which increases with aging.
Right ventricular structure and function Autopsy studies of the RV have demonstrated a progressive loss of myocytes and an increased myocyte volume per nucleus, suggestive of cellular hypertrophy, with increasing age. However, the magnitude of cellular hypertrophy is insufficient to make up for the loss of myocytes, and RV mass declines significantly with aging. In human cohort studies comparing echocardiographic RV parameters cross-sectionally across age groups of healthy individuals, aging was associated with lower RV longitudinal systolic function as measured by the tricuspid annular plane systolic excursion. RV ejection fraction is relatively preserved with aging.
Furthermore, studies have also demonstrated a decline in RV relaxation and increased right atrial pressure, as assessed by echocardiography, with aging. Doppler indices, reflective of flow pattern, demonstrate a reduced early RV
diastolic filling, increased late filling, and reduced myocardial diastolic velocities. The abnormalities in RV systolic and diastolic function with aging have been attributed to increasing pulmonary artery pressure and RV afterload with aging, mostly secondary to increased pulmonary arterial stiffness and vascular resistance in the pulmonary vasculature. No significant differences in RV size were noted with aging in cross-sectional echocardiographic assessments.
Implications of the Age-Related Changes in Resting Cardiovascular Function Aging is associated with a decline in resting oxygen uptake driven by decreases in cardiac output. The decline in cardiac output is related to the reduction in stroke volume. The resting peripheral oxygen extraction has been shown to increase with aging; however, its clinical significance is not well established (Figure 73-5).
FIGURE 73-5. Correlation between age and resting measures of (A) oxygen uptake, (B) cardiac index, (C) stroke volume index, and (D) peripheral oxygen extraction among healthy, community-dwelling volunteers without cardiovascular disease. (Reproduced with permission from Pandey A, Kraus WE, Brubaker PH, et al. Healthy aging and cardiovascular function: invasive hemodynamics during rest and exercise in 104 healthy volunteers. JACC Heart Fail. 2020;8[2]:111–121.)
Aging decreases one’s ability to tolerate challenges to homeostasis. This is most evident in the cardiovascular system. For example, the mortality and probability of developing HF after a myocardial infarction increase dramatically with age. While clearly, the pathogenesis of atherosclerosis and the myocardial infarction itself is not normal aging, the response to the systemic challenges produced by the infarction may well be impaired because of the aging process. Consistent with this, there is an age-related increase in mortality after experimental infarction in mice and rats. We suggest that homeostenosis, the depletion of reserves, may be the cost of invoking compensatory mechanisms to maintain homeostasis.
Similarly, acute hypertension is poorly tolerated in the old. Old (18 months) and adult (9 months) rats had afterload increased by constriction of the aorta. Immediate early response gene signals were attenuated in the old rats. Skeletal actin expression was decreased after pressure overload, and skeletal actin expression precedes cardiac actin expression in most hypertrophy models. Atrial natriuretic peptide (ANP) stimulates the excretion of water and sodium by the kidney. In normal young hearts, only the atria express ANP; when seen in the ventricles, ANP is a marker of stress and compensation. In the old rat, ventricular ANP was elevated at baseline and could not be further stimulated by additional stress. This suggested that the hypertrophy response was already invoked as part of aging in the older rats and was less available to respond to acute stress.
While the normal heart is unlikely to ever be exposed to ischemia, ischemic preconditioning is an adaptation of the young heart that is not present in the old heart. If repeatedly exposed to brief episodes of ischemia, young hearts tolerate longer episodes well with less resultant damage by increasing heat-shock protein levels, opening ATP-gated potassium channels, stimulating the tumor necrosis factor-alpha (TNF-α) cascade, and activating antioxidant enzymes. Old hearts cannot make this adaptation, perhaps contributing to the increased mortality after myocardial infarct in the old.
However, exercise training, caloric restriction, and certain growth factors may restore this adaptive capability.
Responsiveness to some cardioactive drugs, including atropine, dobutamine, and other β-adrenergic agents, is decreased, as noted above. These agents may require higher doses to reach a desired effect in the old. HF becomes increasingly common, reaching a prevalence of more than 10% and being the most common reason for hospitalization of Medicare beneficiaries. The syndrome of HFpEF, the most common form among older persons, is likely facilitated by the age-related changes in diastolic function, myocardial composition, and vasculature discussed above and below, added to the arterial and myocardial changes caused by hypertension and other diseases. Findings from large epidemiological cohort studies have shown that the risk of HFpEF increases with age.
Furthermore, age-related declines in exercise capacity and diastolic function are important predictors of HFpEF development. Age-related changes in vessels and the heart do not by themselves produce disease, but because of the changes in compliance, systolic hypertension is common.
Finally, these changes make the old cardiovascular system more prone to decompensation in response to other insults.
Effect of Age on the Cardiovascular Response During Exercise
If aging affects cardiovascular performance even at rest or with moderate stress, one would expect this to be magnified and become even more apparent during exercise. This is indeed the case. Exercise capacity can be quantified objectively by measurement of maximal oxygen consumption (VO2max) during exercise. It is solidly established that a reduction in VO2max inescapably accompanies normal aging. Although the age at which this decline begins is unclear, it is probably variable and begins early in adult life. The reduction in VO2max is independent of gender and of changes in body size. The magnitude of the decline is approximately 3% to 8% per decade; the rate of decline increases with each decade and can be modified, but not wholly halted or reversed, by exercise training.
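As a rough arithmetic sketch, a per-decade decline in this range compounds substantially over adult life. The rates and ages below are illustrative assumptions chosen only to show the compounding, not study data:

```python
# Compound per-decade decline in VO2max (illustrative rates only; the text's
# 3%-8% per decade is a range whose rate itself accelerates with age).

def remaining_fraction(per_decade_decline, decades):
    """Fraction of the starting VO2max remaining after compounding decline."""
    return (1.0 - per_decade_decline) ** decades

# Over five decades (eg, roughly age 25 to age 75):
low = remaining_fraction(0.03, 5)   # approx 0.86 -> roughly 14% total loss
high = remaining_fraction(0.08, 5)  # approx 0.66 -> roughly 34% total loss
print(round(low, 2), round(high, 2))
```

Because the actual rate of decline accelerates with each decade, observed losses in the oldest cohorts exceed what a constant low rate would predict.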
Initial studies from the BLSA in the 1980s had suggested a relatively small (~3%) decline in VO2max with aging, attributed largely to loss of muscle mass with a modest decline in exercise cardiac function. However, these findings were at substantial variance with other studies; at the time, the difference compared with prior studies was attributed to rigorous screening. A subsequent BLSA report in 2005, which examined a large number of subjects, both sedentary and well conditioned by training, during 8 years of follow-up, thereby providing true longitudinal rather than cross-sectional data, showed that the decline in exercise capacity (VO2max) among older persons was in fact more accelerated and greater in magnitude than all previous estimates, with the rate of decline accelerating from 3% to 6% per decade before 40 years of age to more than 20% per decade among those older than 70 years. In addition, another BLSA report in 1995 showed that, in contrast to the original 1984 study, both men and women do indeed have substantial age-related declines in maximal exercise cardiac output, accompanying and contributing to a 40% decline in VO2max.
Similarly, in a recently reported study of 104 healthy, community-dwelling individuals aged 20 to 76 years who were rigorously screened for subclinical or clinical CVD, a 40% decline in VO2max was observed across
the six decades. This is in accord with reports from all other studies. Thus, there is now uniform agreement that aging, even in the absence of any identifiable disease, is associated with substantial declines in overall cardiovascular performance and reserve capacity, including maximal cardiac output (Figure 73-6).
FIGURE 73-6. Mechanisms of decline in peak exercise oxygen uptake with aging. The key drivers of decline in peak exercise oxygen uptake are largely reductions in peak exercise heart rate and peak exercise stroke volume, which is driven by alterations in Frank-Starling mechanisms and reduced left ventricular contractility at peak exercise. (Reproduced with permission from Pandey A, Kraus WE, Brubaker PH, et al. Healthy aging and cardiovascular function: invasive hemodynamics during rest and exercise in 104 healthy volunteers. JACC Heart Fail. 2020;8[2]:111–121.)
By the Fick principle for oxygen, only a limited number of factors could be responsible for a decline in VO2max. The following equations are
pertinent to this discussion:
VO2 = cardiac output × arteriovenous oxygen difference (A-VO2 diff)
Cardiac output = stroke volume × heart rate
Stroke volume = end-diastolic volume – end-systolic volume
The A-VO2 difference is determined by a number of noncardiac factors, including peripheral vascular function and skeletal muscle mass and metabolic function. Thus, if VO2max declines with aging, there must be a decline in peak cardiac output, in the A-VO2 difference, or in both during exercise.
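The algebra of the Fick relationships can be made concrete with a short numeric sketch. All values below are hypothetical round numbers chosen only to illustrate the logic, not measured data: if the A-VO2 difference is unchanged, a 40% fall in VO2max must be matched by a 40% fall in peak cardiac output.

```python
# Fick-principle arithmetic (hypothetical illustrative values, not patient data).

def vo2(cardiac_output_l_min, avo2_diff_ml_per_l):
    """Oxygen consumption (mL/min) = cardiac output x A-VO2 difference."""
    return cardiac_output_l_min * avo2_diff_ml_per_l

# Assumed young subject at peak exercise:
#   cardiac output ~20 L/min, A-VO2 difference ~160 mL O2 per L of blood
young_vo2max = vo2(20.0, 160.0)  # 3200 mL/min

# Suppose VO2max falls ~40% with aging while the A-VO2 difference is
# unchanged (the common finding cited in the text). Then, by rearranging
# the Fick equation, peak cardiac output must fall by the same ~40%:
old_vo2max = young_vo2max * 0.60
old_cardiac_output = old_vo2max / 160.0  # 12 L/min, a 40% decline

print(young_vo2max, old_vo2max, old_cardiac_output)
```

This is why an unchanged (or even increased) A-VO2 difference in older subjects pushes the explanation for the VO2max decline onto reduced peak cardiac output.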
Measurement of cardiac output in healthy human subjects during exercise is methodologically challenging. Investigators have used various techniques, including direct Fick (probably the most reliable), dye dilution, equilibrium-gated radionuclide angiography, and gas rebreathing. Each of these methods uses multiple variables to derive the cardiac output measurement. Direct measurement of the A-VO2 difference by oximetry, however, is quite accurate and reliable.
Most investigators who have measured the A-VO2 difference during maximal exercise have documented no difference, or an increased A-VO2 difference, in older compared with young subjects. By simple algebra, this suggests that the age-related decline in VO2max must be because of reduced cardiac output. This has, indeed, been the finding reported by virtually all investigators. Accordingly, decreases in the inotropic (contractility), chronotropic (heart rate), and lusitropic (diastolic function) responses to dobutamine/exercise may all play a role in the age-related decline of VO2max (Table 73-3).
TABLE 73-3 ■ MEASURES OF CARDIAC PERFORMANCE AND LEFT VENTRICULAR DIMENSIONS AT REST, SUBMAXIMAL UPRIGHT EXERCISE (50 W), AND MAXIMAL UPRIGHT EXERCISE AMONG HEALTHY INDIVIDUALS ACROSS DIFFERENT AGE GROUPS
The age-related decline in VO2max appears to be driven primarily by reduced exercise cardiac output, stroke volume, and heart rate among older versus younger individuals, as shown in Figure 73-7. Specifically, the reduction in peak exercise stroke volume was most notable among individuals 60 years and older. Other studies using the direct Fick technique, dye dilution, or acetylene rebreathing to assess cardiac output have also demonstrated a decline in maximal cardiac output with aging. The primary mechanism of the age-related decline in exercise cardiac output is the age-related reduction in maximal heart rate. Reduced maximal exercise heart rate appears to be a universal observation and meets the criteria for a basic biological aging phenomenon. Future studies are needed to better understand the biological mechanisms underlying the age-related decline in maximal heart rate.
FIGURE 73-7. Correlation between age and peak exercise measures of (A) oxygen uptake,
(B) cardiac index, (C) stroke volume index, and (D) peripheral oxygen extraction among healthy, community-dwelling volunteers without cardiovascular disease. (Reproduced with permission from Pandey A, Kraus WE, Brubaker PH, et al. Healthy aging and cardiovascular function: invasive hemodynamics during rest and exercise in 104 healthy volunteers. JACC Heart Fail. 2020;8[2]:111–121.)
Although reduced maximal heart rate is the primary mechanism for reduced exercise cardiac output and oxygen consumption in older subjects, in younger subjects in whom exercise heart rate is limited, either by congenital complete heart block or by β-adrenergic blockade, stroke volume increases and partially compensates for the reduced heart rate via the Frank-Starling response (increased end-diastolic volume). The effect of aging on the Frank-Starling mechanism and the maximal stroke volume response to exercise depends on the age range examined. Specifically, studies limited to younger and middle-aged individuals (< 50 years) have failed to appreciate a significant decline in maximal exercise stroke volume with aging. In the most recent and largest study of aging effects studied invasively, in 104 well-screened, healthy men and women reported by Kitzman and colleagues, there was a modest continuous decline in the Frank-Starling response from age 30 to age 80 (the oldest age studied), demonstrated by lower invasively measured end-diastolic and stroke volumes, lower LV ejection fraction, and a trend toward higher pulmonary capillary wedge pressure during exhaustive upright exercise in the old (Figure 73-8).
FIGURE 73-8. Correlation between age and peak exercise measures of (A) diastolic function assessed invasively using pulmonary capillary wedge pressure, (B) heart rate, (C) systolic function assessed as ejection fraction, and (D) left ventricular end-diastolic volume among healthy, community-dwelling volunteers without cardiovascular disease. (Reproduced with permission from Pandey A, Kraus WE, Brubaker PH, et al. Healthy aging and cardiovascular function: invasive hemodynamics during rest and exercise in 104 healthy volunteers. JACC Heart Fail. 2020;8[2]:111–121.)
Furthermore, the declines in end-diastolic volume, stroke volume, and ejection fraction were noted at both submaximal and maximal exercise levels with aging, highlighting a consistent alteration in the Frank-Starling mechanism in response to exercise (Figure 73-9). A lower exercise stroke volume in older adults could result from a higher end-systolic volume or a lower end-diastolic volume. LV end-systolic volume was higher, and ejection fraction was lower, at peak exercise in the older subjects in most studies in which these were measured. Thus, systolic LV functional reserve is reduced with aging as well.
Reduced stroke volume could also result partially from increased afterload since systolic blood pressure, aortic impedance, and systemic vascular resistance are higher during exercise in old than in young, healthy subjects. When afterload is taken into account, maximal stroke work is fairly similar in young and old subjects.
FIGURE 73-9. Differences in left ventricular end-diastolic volume and stroke volume changes in response to exercise across different age groups, as observed in healthy, community-dwelling volunteers without cardiovascular disease. (Reproduced with permission from Pandey A, Kraus WE, Brubaker PH, et al. Healthy aging and cardiovascular function: invasive hemodynamics during rest and exercise in 104 healthy volunteers. JACC Heart Fail.
2020;8[2]:111–121.)
When evaluating older patients for coronary heart disease, it should be recognized that in healthy older people, the LV ejection fraction does not increase as much from rest to peak exercise as it does in young, healthy subjects. In fact, a flat response in older men and a mild decline in older women should be considered normal. However, the development of wall motion abnormalities should be considered abnormal, even in the presence of a mild nonspecific decline in ejection fraction.
There has been less information regarding peripheral cardiovascular function with aging, including systemic arterial function, which is required to deliver oxygenated blood to working muscle efficiently and working muscle itself. Some studies have demonstrated a decline in exercise A-VO2 with
aging, while others have failed to observe this association. An invasive hemodynamic characterization of 104 healthy individuals at rest and peak exercise did not observe a significant association between A-VO2 at peak
exercise and age (Figure 73-7). However, a significant inverse association between age and A-VO2 reserve (the change from rest to peak exercise) was reported, with a smaller absolute increase in A-VO2 from rest to peak exercise in older versus younger individuals, suggesting that reduced peripheral oxygen extraction ability in older age may contribute, to some degree, to the age-related decline in VO2max (Table 73-3). In an invasive hemodynamic
study of old and young individuals without CVD, Beere et al. demonstrated that in addition to reduced peak exercise cardiac output, older men had reduced exercise leg blood flow. The study also repeated the detailed measurements of central and peripheral cardiovascular functions following exercise training. Their results confirmed the findings of a number of previous investigators that exercise training could improve VO2max by 15%
or more and thereby “reverse” some of the age-related declines in physical work capacity. Furthermore, they found that the primary mechanism of improvement in exercise capacity following training in older subjects was a large improvement in leg arterial blood flow.
Skeletal muscle function is another potential contributor to the age-related reduction in VO2max. With aging, there is a decline in skeletal muscle mass, increased fatty infiltration, a shift in fiber type, and variable alterations in mitochondrial density and function. Each of these could contribute to reduced exercise capacity in older persons.
AGING OF THE VASCULATURE
Age-Related Changes in Arterial Structure
With age, a number of changes occur in the aorta, and all appear to contribute to increased stiffness (see Table 73-4). Elastin becomes fragmented in the internal elastic lamina and media, perhaps because of inappropriate activation of MMPs. The MMPs may also liberate proinflammatory signals such as NFκB. Calcification of the media is also seen. Collagen content increases and becomes increasingly cross-linked, making a stiff matrix, especially in the subendothelium.
TABLE 73-4 ■ COMPONENTS OF VASCULAR AGING
Irregularities in size and shape of endothelial cells are seen at areas of turbulence, and the high cellular turnover occurring at these areas suggests that replicative or cellular senescence may occur there. Further evidence of this “in situ” replicative senescence may be provided by manipulations that inhibit telomere shortening. In endothelial cells with persistently long telomeres, age-associated abnormalities may be significantly reduced. In contrast, senescent endothelial cells have upregulated adhesion molecules, proinflammatory cytokines, and decreased NO production. Senescence of endothelial progenitor cells is associated with decreased angiogenesis and impairment of the complex vascular repair system intended to attenuate the chronic vascular injury and inflammation that may lead to atherosclerosis. In contrast to endothelial senescence, pulsatile stretch stimulates vascular smooth muscle cell (SMC) proliferation and hypertrophy; SMCs may become increasingly polyploid, with multiple sets of chromosomes. The proliferative SMCs may migrate from the media to the subintima.
Functional Changes of Aging Arteries
Multiple functional changes occur with aging in conduit arteries. For example, nitric oxide (NO) is a vasorelaxant and contributes to the balance that dictates resting arterial tone. Aortic strips isolated from older animals have higher NO synthase activity but produce less NO. Old aortas will relax
appropriately when exposed to direct NO donors (nitroprusside) but are less responsive to agents whose effects are mediated by NO, such as acetylcholine. Similarly, forearm arterial blood flow is increased less in older individuals in response to acetylcholine compared to younger and athletically active older individuals. Flow-mediated dilation (FMD), the increase in artery diameter in response to blood pressure cuff-induced ischemia, is attenuated with age. The decline in endothelium-dependent dilation of conduit arteries is related to vascular dysfunction and occurs later in the aging process than the increase in arterial stiffness. As FMD is essentially NO-dependent and NO is produced from circulating arginine, with age, a relative increase in arginase (a scavenging enzyme that competes for arginine) results in reduced arginine availability for the endothelium.
This explains, in part, the relatively poor efficacy of arginine supplementation. As the response to direct-acting agents, like nitroprusside, is unchanged by age, endothelial dysfunction must play a critical role. The integrity of the vessels is also dependent upon the endothelium and is less well maintained. Increased vascular permeability facilitates the transit of immune and inflammatory cells and signaling molecules into the vessel wall that stimulate MMP activity and, perhaps, atherosclerosis.
Increased tonic arterial contraction is partially due to an age-related increase in vasoconstricting (ETA) and loss of vasodilating (ETB) receptors for endothelin-1 and to increased circulating levels of this potent vasoconstrictor. This results in a decreased maximal response to added endothelin in older persons. Exercise training appears to decrease basal endothelin-1 levels and restore responsiveness.
Aging and the Microvasculature
The importance of the smallest blood vessels in age-related disease and dysfunction is increasingly recognized. The number of capillaries per volume of tissue (capillarity) decreases with aging in many organs, including skin, skeletal muscle, and brain. This follows a middle-age increase in capillarity in some tissues, perhaps as compensation for metabolic demands.
Arterioles may play a larger role in oxygen and nutrient delivery in older age as compensation for the decreased capillarity. Because arteriolar density is less homogeneous, this leads to disparities in oxygen available to regions of the aged brain and heterogeneous levels of oxygenation, including “hypoxic micropockets” clearly seen in the brains of awake, treadmill-walking normal old, but not middle-aged or young, mice. Altered endothelial function is found in the small vessels of the aging brain, leading to vascular dysregulation similar to that described for the arteries, but the added fine vascular regulation of neurovascular coupling is also impaired in aging. The altered endothelial function is one of many age-associated changes affecting the blood-brain barrier, resulting in selective increases in permeability. These changes have potential clinical implications, since they are associated with cognitive dysfunction. A generalized age-related decrease in collateral vessels, which lowers the threshold for damage during ischemia, is seen in other organs.
Alterations in Angiogenesis with Aging
Angiogenesis is impaired in the old vascular tree in response to ischemia or chemical signals. Explants of arteries from old animals show decreased sprouting of microvessels and decreased vascular invasion of implants. As noted above, there is no deficit in SMC proliferation, but endothelial cell proliferation is impaired. Endothelial cellular senescence, increased oxidative stress, reduced endothelial NO production, and reduced responsiveness to angiogenic growth factors all contribute to lower angiogenesis in older individuals.
Clinical Implications of the Age-related Changes in Vascular Structure and Function
The aortic root and lumen diameter increase with age, as do vessel length and wall thickness. Because the aorta is fixed proximally and distally, the increase in length results in the tortuous, ectatic, and rightward-shifted aorta often seen on chest X-rays of older persons. Arterial wall stiffness can be assessed noninvasively as pulse wave velocity (PWV, the rate at which the pressure wave travels along the artery wall), augmentation index (AI, the ratio of augmented central pressure to pulse pressure), distensibility, and systolic and pulse (systolic minus diastolic) blood pressure. Aging is an important determinant of large artery stiffness, with PWV increasing twofold from age 20 to 80, independent of blood pressure. The age-related changes in large artery structure and function have been demonstrated in the absence of clinical CVD and CV risk factors, highlighting a direct, more causal association between aging and vascular dysfunction.
A stiffer arterial wall allows pressure to reflect from the periphery to the heart while the aortic valve is still open, increasing the load on the heart.
Thus PWV is a physiologically relevant parameter. In younger persons, PWV is relatively low; the reflected wave arrives back at the heart after aortic valve closure, supporting diastolic pressure and improving coronary perfusion without contributing to LV afterload. With aging and arterial stiffening, PWV increases, and the reflected pressure waves are of greater amplitude and arrive back at the heart before aortic valve closure, resulting in increased LV afterload, LV hypertrophy, diastolic dysfunction, relative coronary hypoperfusion, lower diastolic blood pressure, and increased pulse pressure. The augmentation index, a more direct measure of the additive impact of the reflected pressure waves, increases fourfold from age 20 to 80, contributing to the increase in systolic pressure that occurs with age. For men in the Framingham study, systolic BP increased 5 mm Hg per decade until age 60; thereafter, the slope shifted to 10 mm Hg per decade. For women, systolic BP started lower but shifted to the higher slope earlier. Over the same age range, diastolic BP increases slightly and then decreases. Thus the age-related arterial stiffening results in systolic hypertension.
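The timing argument behind reflected-wave physiology can be sketched numerically. All values below (reflection distance, PWV values, ejection duration) are hypothetical round numbers chosen only to illustrate the mechanism: the reflected wave's round-trip time is roughly twice the distance to the reflection site divided by PWV, and whether it returns before or after aortic valve closure depends on that time relative to the systolic ejection period.

```python
# Reflected pressure-wave timing sketch (hypothetical illustrative values).

def round_trip_time_ms(distance_m, pwv_m_s):
    """Time (ms) for a pressure wave to travel to the reflection site and back."""
    return 2.0 * distance_m / pwv_m_s * 1000.0

EJECTION_DURATION_MS = 300.0  # assumed systolic ejection period

# Assume a reflection site ~0.9 m from the heart. Young aorta: PWV ~5 m/s;
# old, stiff aorta: PWV ~10 m/s (the twofold increase from age 20 to 80).
young = round_trip_time_ms(0.9, 5.0)   # approx 360 ms: after valve closure
old = round_trip_time_ms(0.9, 10.0)    # approx 180 ms: during systole
print(young, old)
```

With the assumed values, the young reflected wave returns in diastole (augmenting coronary perfusion), while the old one returns during ejection (augmenting systolic pressure and LV afterload), matching the mechanism described above.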
Older athletes have lower systolic pressures and lower PWV than sedentary older adults, but higher than young people. In fact, PWV correlates inversely with maximal oxygen consumption (VO2max) in healthy people across ages; this strong association between VO2max and measures of vessel stiffness, particularly PWV, suggests that arterial stiffening contributes significantly to the age-related decline in exercise capacity via several mechanisms, including increased afterload on the LV and altered peripheral blood flow distribution.
The net result of these changes is reduced compliance and increased impedance, leading to increased systolic blood pressure with little, if any, effect on diastolic blood pressure, such that pulse pressure increases. In young adults, the aorta and proximal large arteries act as an elastic buffering chamber, storing about half of the LV stroke volume delivered during systole; during diastole, the elastic recoil of the aortic wall pushes this volume to the peripheral circulation, creating nearly continuous peripheral blood flow. The stiffer large arteries of older adults are less able to smooth out the flow, and thus smaller vessels are exposed to pulsatile flow and pressure. The age-related increases in arterial stiffening likely impact the prevalence and severity of a range of common disorders in older persons, including coronary, cerebrovascular, and peripheral artery disease, systolic
hypertension, stroke, HF, particularly HFpEF, cognitive dysfunction, and renal disease.
Arterial stiffness is associated with frailty through CVD, but abnormal arterial structure and physiology may be independently associated with frailty. Cross-sectional studies show that markers of arterial stiffening are associated with frailty, as measured by both the Fried and Rockwood criteria. Increased PWV was associated with sarcopenia and slow gait speed as well.
As noted above, lifelong athletes have lower arterial stiffness than sedentary controls. Training for a marathon with thrice-weekly long runs resulted in lower arterial stiffness and modest blood pressure changes. This suggests that the aging changes may be reduced with aggressive exercise.
Furthermore, novel therapies such as alagebrium, a prototypical breaker of advanced glycation end-product collagen cross-links, have effectively decreased PWV and augmentation index in older primates and people, highlighting a potential role in ameliorating age-related arterial stiffness. Pharmacologic therapies that reduce diastolic blood pressure appear able to reduce measured arterial stiffness. While this may reflect decreased passive stretch at the time arterial stiffness is measured, studies focused on arterial stiffness showed no effect of classic antihypertensive drugs on PWV. RAAS antagonists reduce arterial collagen deposition, and renin-angiotensin blockers inhibit the expression of proinflammatory mediators and attenuate adverse vascular remodeling, but definitive studies in normotensive older people are not available.
SUMMARY
Normal aging is accompanied by substantial alterations in the anatomy and physiology of the heart and vasculature. There are declines in most aspects of cardiovascular function, including cardiac output, blood flow distribution, and oxygen utilization, which create a significantly reduced reserve capacity that becomes more apparent during exercise and stress.
The age-related alterations in the anatomy and physiology of the heart likely have varying degrees of significance. Some may not have functional significance and are essentially epiphenomena of aging. Others, such as aortic sclerosis, ventricular septal thickening, and an attenuated cardiac functional response during exercise, may simulate disease. Some findings that are associated with age and prevalent in older hearts, such as senile amyloid and a calcified mitral annulus, are likely part of disease processes rather than aging.
Vascular aging also plays a critical role in the aging of the cardiovascular system, increasing the load on the heart and altering the perfusion of the target organs. While aging is a separate process from atherosclerosis, aging increases the risk of the development of atherosclerosis. The primary research focus has been large artery changes, but age-related alterations in the microvasculature may be just as important.
With the currently available information, it is not always possible to distinguish the effects of aging from the effects of disease, particularly in very old persons. However, it is reasonable to propose that many of the age-related changes discussed may lower the threshold for clinical disease and thus predispose to a variety of cardiovascular disorders in older adults, including HF, hypertensive hypertrophic cardiomyopathy, valvular stenosis and regurgitation, systolic hypertension, supraventricular arrhythmias, and conduction disturbances. Awareness of these age-related changes, and of the principles of aging biology in general, will help investigators avoid potential errors in research study design or interpretation and help clinicians tailor intelligent treatments to older adults. Since many of these age-related declines in cardiovascular and exercise performance are modifiable and have been shown to be partially preventable and reversible with exercise training, maintaining regularly scheduled physical activity and conditioning is an important strategy to mitigate the adverse effects of aging on cardiovascular function.
FURTHER READING
Cieslik KA, Trial J, Entman ML. Defective myofibroblast formation from mesenchymal stem cells in the aging murine heart: rescue by activation of the AMPK pathway. Am J Pathol. 2011;179:1792–1806.
DeSouza CA, Shapiro LF, Clevenger CM, et al. Regular aerobic exercise prevents and restores age-related declines in endothelium-dependent vasodilation in healthy men. Circulation. 2000;102(12):1351–1357.
Hermeling E, Hoeks AP, Winkens MH, et al. Non-invasive assessment of arterial stiffness should discriminate between systolic and diastolic pressure ranges. Hypertension. 2010;55:124–130.
Jones MR, Ravid K. Vascular smooth muscle polyploidization as a biomarker for aging and its impact on differential gene expression. J Biol Chem. 2004;279(7):5306–5313.
Kawaguchi M, Hay I, Fetics B, Kass DA. Combined ventricular systolic and arterial stiffening in patients with heart failure and preserved ejection fraction: implications for systolic and diastolic reserve limitations.
Circulation. 2003;107:714–720.
Lakatta EG, Levy D. Arterial and cardiac aging: major shareholders in cardiovascular disease enterprises: part II: the aging heart in health: links to heart disease. Circulation. 2003;107:346–354.
Lakatta EG, Sollott SJ. Perspectives on mammalian cardiovascular aging: humans to molecules. Comp Biochem Physiol. 2002;132:699–721.
Lee TM, Su SF, Chou TF, Lee YT, Tsai CH. Loss of preconditioning by attenuated activation of myocardial ATP-sensitive potassium channels in elderly patients undergoing coronary angioplasty. Circulation.
2002;105:334–340.
Leung DY, Boyd A, Ng AA, Chi C, Thomas L. Echocardiographic evaluation of left atrial size and function: current understanding, pathophysiologic correlates, and prognostic implications. Am Heart J. 2008;156:1056– 1064.
Longobardi G, Abete P, Ferrara N, et al. “Warm-up” phenomenon in adult and elderly patients with coronary artery disease: further evidence of the loss of “ischemic preconditioning” in the aging heart. J Gerontol A Biol Sci Med Sci. 2000;55:M124–M129.
Matsushita H, Chang E, Glassford AJ, Cooke JP, Chiu CP, Tsao PS. eNOS activity is reduced in senescent human endothelial cells: preservation by hTERT immortalization. Circ Res. 2001;89(9):793–798.
Moeini M, Lu X, Avti PK, et al. Compromised microvascular oxygen delivery increases brain tissue vulnerability with age. Sci Rep.
2018;8(1):8219.
Nichols WW. Clinical measurement of arterial stiffness obtained from non- invasive pressure waveforms. Am J Hypertens. 2005;18(1 pt 2):S3–S10.
Novelli M, Pocai A, Skalicky M, Viidik A, Bergamini E, Masiello P. Effects of life-long exercise on circulating free fatty acids and muscle triglyceride content in ageing rats. Exp Gerontol. 2004;39(9):1333– 1340.
Olsen H, Vernersson E, Lanne T. Cardiovascular response to acute hypovolemia in relation to age. Implications for orthostasis and hemorrhage. Am J Physiol Heart Circ Physiol. 2000;278:H222–H226.
Pandey A, Kraus W, Brubaker P, Kitzman D. Healthy aging and cardiovascular function: invasive hemodynamics during rest and exercise in 104 healthy volunteers. JACC Heart Fail. 2020;8(2):111–121.
Redfield MM, Rodeheffer RJ, Jacobsen SJ, et al. Plasma brain natriuretic peptide concentration: impact of age and gender. J Am Coll Cardiol. 2002;40:976–982.
Tanaka H, Monahan KD, Seals DR. Age-predicted maximal heart rate revisited. J Am Coll Cardiol. 2001;37:153–156.
Vaitkevicius PV, Lane M, Spurgeon H, et al. A cross-link breaker has sustained effects on arterial and ventricular properties in older rhesus monkeys. Proc Natl Acad Sci USA. 2001;98(3):1171–1175.
Wang M, Takagi G, Asai K, et al. Aging increases aortic MMP-2 activity and angiotensin II in nonhuman primates. Hypertension. 2003;41:1308–1316.
Chapter
74
Coronary Heart Disease and Dyslipidemia
Michael G. Nanna, Karen P. Alexander
INTRODUCTION
The spectrum of coronary heart disease (CHD) includes subclinical CHD, asymptomatic or stable ischemic heart disease, and acute coronary syndromes including unstable angina and acute myocardial infarction (MI). Atherosclerosis in the coronary circulation contributes to luminal narrowing and increases risk of vascular dysfunction and thrombosis. Clinical presentations of CHD result from insufficient oxygen supply for the demands of the myocardium. Dyslipidemia is a major risk factor for the development of CHD in individuals up to age 80. There are multiple available therapeutic options to reduce blood cholesterol levels, many of which also modify future risk of cardiovascular events.
EPIDEMIOLOGY
Despite declining mortality over the past three decades, CHD remains the leading killer of both men and women in the United States. More than 80% of deaths from CHD occur in those older than 65 years. In the United States, the prevalence of CHD, MI, and angina all increase with age in both men and women (Figures 74-1 and 74-2). The initial manifestation of CHD may be an acute MI, occurring in about 40% of cases, or sudden death in 10% to 20% of cases. The average age of first MI is 66 years for men and 72 years for women. In-hospital mortality following an MI also rises sharply with age: less than 1% in those younger than 50 years old, ~2.5% in those 60 to 69 years old, ~4% in those 70 to 79 years old, and ~8% among those 80 years or older. One-year mortality similarly increases with age. Furthermore, the
majority of patients with CHD older than 75 years are women because of their longer life expectancy and the 10-year lag in CHD manifestations as compared with men.
FIGURE 74-1. Prevalence of coronary heart disease by age and sex, United States. (Adapted with permission from NHANES, 2013–2016. National Heart, Lung, and Blood Institute. US Department of Health & Human Services.)
FIGURE 74-2. Prevalence of myocardial infarction (MI) by age and sex, United States. (Adapted with permission from NHANES, 2013–2016. National Heart, Lung, and Blood
Institute. US Department of Health & Human Services.)
Clinically evident CHD represents the tip of the iceberg with many older patients having asymptomatic and subclinical coronary disease. The Cardiovascular Health Study examined the prevalence of clinical and subclinical cardiovascular disease (CVD) in a large community-dwelling Medicare population. Using a composite measure of MI on electrocardiogram (ECG) or echocardiography and abnormal carotid artery wall thickness or ankle-brachial blood pressure index, they found that disease prevalence doubled from 22% in women aged 65 to 70 to 43% in those aged 85 or older. Similarly, the frequency of subclinical vascular disease in men increased from 33% to 45% in these age groups, respectively.
Learning Objectives
Understand the prevalence of coronary heart disease (CHD) in older adults.
Recognize the clinical aspects, including symptoms, signs, and diagnostic test results, that are common among older adults with CHD.
Understand treatment of CHD, including treatment of dyslipidemias, in older adults.
Key Clinical Points
CHD is common and has high morbidity and mortality in older adults.
Many older patients have asymptomatic, stable, or subclinical ischemic heart disease.
Total cholesterol, low-density lipoprotein cholesterol (LDL-C), and triglyceride (TG) levels increase from the third to seventh decade of life. Typically, LDL-C remains stable or even declines in older age cohorts.
Dyslipidemia is a well-established risk factor for cardiovascular disease, but the strength of this association diminishes with age, and limited data exist for those older than age 80.
Typical angina is the most common presenting symptom of CHD regardless of age.
Delays in recognizing other symptoms such as dyspnea, fatigue, or epigastric discomfort may contribute to later presentations among older adults.
Evaluation for symptoms suggestive of CHD should be similar in older and younger patients. Functional testing is a valuable diagnostic and prognostic tool in older adults. Modified protocols or pharmacologic stress tests may be used for those who have difficulty with standard exercise protocols.
Management of CHD should be similar in older and younger patients, prioritizing risk factor modification, symptomatic relief, and goals of care.
Revascularization is an effective method for relief of frequent angina, particularly if symptoms persist despite optimally tolerated medical therapy.
PATHOPHYSIOLOGY OF DYSLIPIDEMIA AND CHD
The development of CHD is associated with a variety of well-established risk factors, including the presence of dyslipidemia, hypertension, diabetes mellitus, tobacco use, obesity, chronic renal insufficiency, and genetic risk factors for CHD. Other risk factors, such as early menopause, connective tissue disease, and human immune deficiency virus, have also been linked with higher risk for future cardiovascular events. A complete discussion of these risk factors is beyond the scope of this chapter and is discussed elsewhere in this textbook. This chapter’s focus is on the link between dyslipidemia and CHD.
Dyslipidemia and Age
Elevated total cholesterol and LDL-C increase the risk for atherosclerotic cardiovascular disease (ASCVD) in middle-aged men and women. Multiple cross-sectional studies have demonstrated changes in lipid patterns across
age groups. In general, total cholesterol, LDL-C, and TG levels all increase in both men and women from the third to the seventh or eighth decades of life. Changes in LDL-C are accelerated in women starting at menopause with the reduction in systemic estrogen. Beyond the seventh and eighth decades of life, LDL-C and cholesterol levels plateau and often decline. Lower cholesterol levels in adults 75 years or older may be related to healthy survivorship bias, as individuals with lower cholesterol levels are more likely to survive to old age. The decline in cholesterol levels observed in older populations may also be related to a variety of other less favorable factors, including malnutrition, multimorbidity, inflammation, and frailty. The data supporting the association between LDL-C and the development of CHD in older adult populations are therefore less clear. For example, in a well-characterized cohort of US adults older than 75 years and free of CVD at baseline, LDL-C was not associated with 5-year CVD risk. In another analysis, from the Copenhagen General Population Study, higher LDL-C was associated with future risk of MI among individuals aged 70 to 100.
Comorbidities and frailty also confound the association between cholesterol and mortality. Older persons at both ends of the cholesterol curve, with the lowest and the highest cholesterol levels, may be at higher risk for cardiovascular events and mortality. Ultimately, regardless of the attributable risk associated with hypercholesterolemia in older adult populations, studies are ongoing to identify whether targeting lipids with pharmacologic therapies can improve cardiovascular outcomes in older adults (which is discussed in more detail in the “Evaluation and Management” section).
There are five major subpopulations of lipoproteins that provide additional information on risk. These include chylomicrons, very low-density lipoproteins (VLDLs), intermediate-density lipoproteins (IDLs), low-density lipoproteins (LDLs), and high-density lipoproteins (HDLs). Each differs in composition, metabolic function, and atherogenic potential. Atherogenic lipid particles include apolipoprotein B (ApoB) and lipoprotein(a) (Lp[a]). ApoB is the primary apolipoprotein of chylomicrons, VLDLs, IDLs, and LDLs and functions as the ligand for the LDL receptor. Lp(a) is an LDL-like lipoprotein particle in which apolipoprotein(a) is bound to ApoB. While these subparticles are associated with risk, LDL-C remains the focus of existing therapies. The relationship between aging and changes in the concentration of these subparticles, as well as their association with cardiovascular risk, represents an important area for future investigation.
CHD and Age
Cardiovascular changes are common with age. Arterial stiffening is prevalent and results in isolated systolic hypertension with widened pulse pressures, factors known to increase risk of cardiovascular events. Heart failure with preserved ejection fraction (EF) is a prevalent, related condition that elevates end-diastolic pressures and impairs diastolic filling of the coronary circulation. Age-associated endotheliopathy, defined by progressive endothelial dysfunction and blunted responses to protective vasodilatory mediators, results in atherosclerotic plaques of increasing number and severity. The composition of these atherosclerotic lesions also changes with age, with reduction in the soft lipid core and an increase in calcification and fibrosis. While more advanced calcified plaques are actually less likely to rupture, the sheer increase in lesion numbers is associated with a higher likelihood for CHD events in older adults.
The pathophysiology of MI involves atherosclerotic plaque rupture, platelet activation/aggregation, endothelial dysfunction, inflammation, and thrombus formation. If a clot completely occludes a coronary artery, the patient suffers an acute MI, and an injury pattern (eg, ST elevation) is often seen on the ECG. In contrast, patients with plaque rupture can also form a nonocclusive thrombus, resulting in subendocardial ischemia. The distinction between unstable angina and MI shifted with the advent of high-sensitivity troponin assays, which can now identify very low levels of circulating cardiac troponin. Even the lowest levels of circulating troponin above the detection threshold are associated with increased risk. In practice, MI is also often stratified by ECG findings into ST-segment elevation MI (STEMI) or, when the ECG is nondiagnostic, non-ST-segment elevation MI (NSTEMI).
Most recently, the fourth universal definition of MI provides updated definitions for myocardial injury and MI (Figure 74-3).
FIGURE 74-3. Fourth universal definition of myocardial infarction: a model for interpreting myocardial injury. URL, upper reference limit.
PRESENTATION
CVD may be diagnosed following an acute cardiovascular presentation, identification of ischemia on noninvasive testing, or finding obstructive coronary artery disease (CAD) on coronary imaging. Some examples of presentation types include evidence of prior silent MI on ECG, stable angina without ischemia on noninvasive testing, or asymptomatic ischemia found on noninvasive testing. The presence of obstructive CAD and angina may also vary across these presentations. Coronary calcifications, frequently noted on nongated chest imaging, identify the presence of atherosclerosis, and should trigger evaluation and management of risk factors and/or symptoms. An MI requires the presence of acute myocardial injury detected by abnormal cardiac biomarkers in conjunction with clinical evidence of acute myocardial ischemia. Abnormal cardiac biomarkers are generally defined as an elevated cardiac troponin value above the 99th percentile upper reference limit (URL). Signs of myocardial ischemia include clinical symptoms, ischemic ECG changes, pathological Q waves on ECG, imaging evidence of ischemia, or identification of coronary thrombus on angiography or autopsy. When patients present with a cardiac troponin level greater than or equal to the 99th percentile of the URL with a characteristic rise and fall and signs/symptoms of acute ischemia, they meet criteria for an acute MI. If the MI is due to atherosclerosis and thrombosis (with either complete or partial occlusion of a coronary artery), patients meet criteria for a type 1 MI. This is
often triggered by plaque rupture or erosion. If, however, the findings are related to an oxygen supply and demand imbalance, such as due to fixed coronary atherosclerosis, coronary spasm, coronary embolism, coronary dissection, severe anemia, severe hypotension/hypertension or tachyarrhythmia, then the patient meets criteria for type 2 MI. When the patient has a characteristic rise and fall in troponin but without clinical signs of acute ischemia, they meet criteria for acute myocardial injury, which may be due to conditions such as acute heart failure or myocarditis. Finally, if troponin levels are stable without a characteristic rise and fall, this represents chronic myocardial injury. This can be seen with left ventricular hypertrophy, structural heart disease, or chronic kidney disease.
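The classification described above follows a stepwise logic that can be sketched as simple code (an illustrative simplification of the fourth universal definition, not a clinical tool; the boolean flag names are ours):

```python
def classify_troponin_elevation(above_99th_url: bool,
                                rise_and_fall: bool,
                                acute_ischemia: bool,
                                atherothrombotic: bool) -> str:
    """Simplified sketch of the fourth universal definition of MI.

    above_99th_url:   troponin above the 99th percentile upper reference limit
    rise_and_fall:    characteristic rise and/or fall of troponin values
    acute_ischemia:   clinical evidence of acute myocardial ischemia
                      (symptoms, ischemic ECG changes, imaging, thrombus)
    atherothrombotic: acute plaque rupture/erosion with thrombosis
    """
    if not above_99th_url:
        return "no myocardial injury"
    if not rise_and_fall:
        # stable elevation: eg, left ventricular hypertrophy, chronic kidney disease
        return "chronic myocardial injury"
    if not acute_ischemia:
        # rise/fall without ischemia: eg, acute heart failure, myocarditis
        return "acute myocardial injury"
    # acute MI: type depends on mechanism
    return "type 1 MI" if atherothrombotic else "type 2 MI"
```

For example, a troponin rise and fall with ischemic symptoms in the setting of sepsis-driven supply-demand mismatch (no plaque rupture) would classify as a type 2 MI under this logic.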
The prevalence of angina increases with age (Figure 74-4), but older individuals also often have anginal-equivalent symptoms such as dyspnea, epigastric pain, fatigue, confusion, or malaise that may be misinterpreted as consequences of aging or comorbid illness. Findings from the Global Registry of Acute Coronary Events, a large, prospective, multinational registry of acute coronary syndromes (ACSs), demonstrated that patients presenting with anginal-equivalent symptoms were less likely to receive appropriate cardiac medications or undergo cardiac catheterization and were at higher risk of in-hospital morbidity and mortality. Older people can also have an impaired ischemia warning system. In a series of patients with CHD undergoing treadmill testing, researchers found that patients older than age 70 took more than twice as long as their younger counterparts to report angina after ECG-documented ischemia was noted.
FIGURE 74-4. Prevalence of angina pectoris by age and sex, United States. (Adapted with permission from NHANES, 2013–2016. National Heart, Lung, and Blood Institute. US Department of Health & Human Services.)
Difficulty in recognizing symptoms contributes to later presentation of acute events in older patients. More than two-thirds of patients with MI older than age 65 fail to reach an emergency department within 6 hours of symptom onset. The Rapid Early Action for Coronary Treatment study quantified the delay as an additional 14 minutes for every 10-year increment in age, beginning at age 30. While time to first medical contact has improved as community and state efforts have targeted improving MI systems of care, delays in MI presentation still have strong prognostic implications. Prehospital delays may result from atypical presentation, medical comorbidities, previous experiences within the health care system, socioeconomics, access to care, and cognitive and functional impairments.
Thus, clinicians should advise that cardiac symptoms can vary and patients should seek rapid medical attention if concerning symptoms occur.
EVALUATION
Evaluation of Stable CHD
Given the prevalence of CHD in older patients, clinicians must have a high index of suspicion to make the diagnosis. In taking a history, clinicians must
consider risk factors as well as temporal course of symptoms suggestive of CHD. Patients with new, progressive, or refractory symptoms typically require an expedited—and possibly inpatient—evaluation.
A systematic approach to the physical examination may provide further clues to the presence of CHD. Some older patients develop calcific vascular disease, and pseudohypertension may be observed. Diminution of the femoral pulses or brachial-femoral delay may suggest the presence of atherosclerotic aorto-iliac disease, and these findings may accompany dermatologic changes such as lower extremity hair loss. Performing ankle-brachial indices remains a useful and sensitive screening tool for identifying patients with peripheral vascular disease, a known risk factor for increased cardiovascular events. The cardiac examination may include signs of left- or right-sided heart failure (pulmonary edema, displaced point of maximal impulse, an S3, or peripheral edema) or characteristic murmurs of valvular heart disease. Once a thorough history and physical examination have been completed, further diagnostic evaluation should be based on the patient’s symptoms as outlined below. Risk factors, particularly blood pressure, should be measured. Obtaining a baseline ECG is also reasonable because of the high prevalence of silent MIs in older individuals. Beyond the standard history, physical examination, and laboratory tests, further diagnostic testing (carotid ultrasound, treadmill testing, echocardiography, or computed tomography) in the asymptomatic older patient to detect occult or subclinical CHD remains controversial and is not generally recommended.
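The ankle-brachial index mentioned above is a simple ratio; the sketch below uses the conventional interpretation thresholds (the function name and cutoffs are stated here for illustration, and note that the calcific, noncompressible vessels common in older patients can falsely elevate the ratio):

```python
def ankle_brachial_index(ankle_systolic_mmhg: float,
                         brachial_systolic_mmhg: float) -> tuple:
    """ABI = ankle systolic pressure / brachial systolic pressure (per side,
    conventionally using the higher of the two pressures at each site).

    Commonly cited thresholds (assumed here for illustration):
      <= 0.90  abnormal; suggests peripheral arterial disease
      > 1.40   noncompressible (calcified) vessels, common in older adults
    """
    abi = ankle_systolic_mmhg / brachial_systolic_mmhg
    if abi > 1.40:
        return abi, "noncompressible (calcified) vessels; ABI unreliable"
    if abi <= 0.90:
        return abi, "abnormal: peripheral arterial disease likely"
    return abi, "within normal range"
```

For example, an ankle pressure of 81 mm Hg with a brachial pressure of 120 mm Hg gives an ABI of about 0.68, in the abnormal range.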
Symptomatic older patients should undergo a similar assessment for obstructive coronary disease as younger patients, based on algorithms that take into account symptom characteristics, including angina type (nonanginal, anginal-equivalent, or typical), its course (stable, progressive, or unstable), and its duration. This initial assessment of a patient’s pretest probability of disease should guide diagnostic testing. In particular, clinicians need to be cognizant that, according to Bayesian theory, the predictive value of a test is influenced by the disease prevalence in the population tested. For example, clinicians may interpret a negative stress test in a high-risk older woman (pretest probability of disease 80%) as “ruling out” the presence of CHD, whereas this patient’s posttest likelihood remains more than 60% (Table 74-1). For these reasons, older patients with a high pretest likelihood for coronary disease should be considered for direct referral for cardiac catheterization (if revascularization is an appropriate option). At the other extreme, patients with a low pretest probability for CAD, less than or equal to 20% (ie, no risk factors, normal ECG, and very atypical symptoms), can often be followed clinically and/or be assessed for other etiologies of their symptoms (gastrointestinal, pulmonary, musculoskeletal, etc). Older patients with an intermediate pretest probability for CHD (between 20% and 70%) are those in whom stress testing has its greatest impact on clinical decision-making.
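The Bayesian update behind this worked example can be sketched with likelihood ratios; the sensitivity and specificity used below (~0.68 and ~0.77) are illustrative values for exercise ECG assumed for this sketch, not figures from this chapter:

```python
def posttest_probability(pretest_p: float,
                         sensitivity: float,
                         specificity: float,
                         test_positive: bool) -> float:
    """Update disease probability after a test result using likelihood ratios.

    Steps: probability -> odds, multiply by the likelihood ratio for the
    observed result, then convert odds back to probability.
    """
    pre_odds = pretest_p / (1.0 - pretest_p)
    if test_positive:
        lr = sensitivity / (1.0 - specificity)      # positive likelihood ratio
    else:
        lr = (1.0 - sensitivity) / specificity      # negative likelihood ratio
    post_odds = pre_odds * lr
    return post_odds / (1.0 + post_odds)
```

With these assumed operating characteristics, a negative test in a patient with an 80% pretest probability leaves a posttest probability of roughly 62%, consistent with the point that a negative stress test cannot “rule out” CHD in a high-pretest-probability older patient.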
TABLE 74-1 ■ INFLUENCE OF AGE ON PREDICTIVE VALUE OF STRESS TESTING (BAYES THEOREM)
When stress testing is indicated, guidelines recommend exercise ECG as a first strategy for patients with a normal baseline ECG. The exercise ECG provides important prognostic information (including exercise duration and hemodynamic response), as well as electrocardiographic indications of ischemia (ST depression). Older patients, however, frequently experience difficulty with exercise testing because of deconditioning or disability and may need modified protocols starting at lower workloads with slower stage progression. Alternatively, for patients who cannot exercise, a pharmacologic stress test (dobutamine, adenosine, or dipyridamole) can be performed. In older patients with baseline ECG abnormalities (resting ST depression, left bundle branch block, left ventricular hypertrophy with strain, or paced rhythms), imaging modalities, such as nuclear perfusion or stress echocardiography, are required. While these modalities significantly add to the cost of the test, they improve diagnostic accuracy beyond stress ECG alone and provide information on the location and extent of coronary disease. Thus, the choice of diagnostic test should consider the clinical setting as well as local availability and expertise
(Figure 74-5). In the PROMISE trial, functional testing was able to distinguish future risk of cardiovascular death/MI in individuals 65 years and older, whereas a positive result, defined as stenosis ≥ 70% or ≥ 50% left main stenosis, on anatomic testing with coronary computed tomography angiography (CCTA) did not correlate with future outcomes in older patients. There is likely a role for CCTA to exclude surgical disease in asymptomatic, nonfrail older adults with low EF. There is no need to repeat stress tests without a change in symptoms. Guidelines advise against repeat testing within 2 years of percutaneous coronary intervention (PCI) and 5 years of coronary artery bypass graft (CABG) surgery.
FIGURE 74-5. Work-up and management of suspected ischemic heart disease. CMR, cardiac magnetic resonance; IHD, ischemic heart disease; MPI, myocardial perfusion imaging; UA, unstable angina. (Reproduced with permission from Fihn SD, Gardin JM, Abrams J, et al. 2012 ACCF/AHA/ACP/AATS/PCNA/SCAI/STS guideline for the diagnosis and management of patients with stable ischemic heart disease: a report of the American College of Cardiology Foundation/American Heart Association task force on practice guidelines, and the American College of Physicians, American Association for Thoracic Surgery, Preventive Cardiovascular Nurses Association, Society for Cardiovascular Angiography and Interventions, and Society of Thoracic Surgeons. Circulation. 2012;126[25]:e354–e471.)
Cardiac catheterization is as safe in contemporary practice in older patients as in younger patients, but it should always be performed based on a favorable risk-benefit ratio and in alignment with the patient’s goals of care. Vascular injury, bleeding, MI, stroke, and even mortality can result, albeit rarely, and advanced age increases these risks. However, the risk to life remains less than 0.2%, and the risk of other serious adverse events is less than 0.5%, even in those aged 75 or older. Cardiac catheterization should be considered for those at high risk of severe coronary disease or with refractory ischemic symptoms despite maximally tolerated medical treatment. At the same time, it is safe to defer cardiac catheterization in favor of initial medical management in patients with stable ischemic heart disease. This is a helpful clarification, particularly for older patients with multimorbidity and in those with even moderate to severe ischemia on stress testing, as demonstrated in the International Study of Comparative Health Effectiveness with Medical and Invasive Approaches (ISCHEMIA) trial. In this study, there was no difference in the composite cardiovascular outcome or all-cause mortality with a conservative approach optimizing medical therapy compared to an initial invasive approach in patients with stable ischemic heart disease and EF greater than or equal to 35%. Revascularization does continue to provide important angina relief when frequent symptoms persist despite maximally tolerated medical therapy. It also improves survival for those with multivessel disease and EF less than 35%. In summary, if symptoms can be managed with medical therapy in stable patients, then PCI is unlikely to add benefit and has not been shown to improve mortality.
On the other hand, if patients are unable to tolerate antianginal therapies due to side effects, concerns around polypharmacy, or patient preferences, or if PCI can provide more effective symptom control and/or a more durable symptom relief, revascularization may be preferable. Given these complexities, person- centered decision-making around the management of CHD is of the utmost importance.
MI Evaluation
Older patients with MI often have an acceleration of chest pain symptoms and may have more subtle changes on the ECG (eg, flipped T waves or ST depression) or more dramatic changes (eg, ST elevation). There is some evidence that plasma levels of procoagulation markers and coagulation factors are elevated in older patients, but it remains unclear whether these
findings alone are responsible for increased thrombotic tendencies in older patients or alter risk when accompanied by other traditional risk factors for thrombotic events. There are also significant proportions of older patients who develop MI secondary to exacerbations of chronic comorbid conditions or acute medical illnesses. These type 2 MIs occur in the setting of sepsis, acute blood loss or chronic anemia, pneumonia, pulmonary embolism, chronic obstructive pulmonary disease, congestive heart failure, dysrhythmias, or hypertensive urgencies. A retrospective study demonstrated that approximately 30% of patients present with an acute noncardiac condition concomitant with an MI, contributing to increased mortality and less use of cardiac medications and interventions. These secondary events usually occur in the context of increased myocardial oxygen demand or hemodynamic stress in patients with underlying CAD and represent a substantial number of cases. The distinction between a type 1, or spontaneous, MI and a type 2, or secondary, MI is helpful to determine the best approach to management. In the latter, a focus on correcting the supply-demand mismatch and on risk stratification is warranted, whereas in the spontaneous MI group, a more typical approach with anticoagulation and cardiac catheterization is warranted.
MEDICAL MANAGEMENT
Antiplatelet Therapy
There is strong support for the benefit of aspirin in secondary prevention, and in select patients for primary prevention in the presence of risk factors. The Antithrombotic Trialists’ Collaboration, which performed a meta-analysis of aspirin trials and included more than 135,000 patients, identified a 25% risk reduction in cardiovascular events. In the Physicians’ Health Study of 44,000 men without known CHD, those randomized to aspirin had a 44% lower risk for subsequent MI versus those taking placebo. Observational data from the Nurses’ Health Study suggest similar benefits of aspirin for primary CHD prevention in women. The reduction in nonfatal cardiovascular events and stroke was greater for secondary prevention than for primary prevention, particularly among those at low risk. Recent randomized controlled trial data have cast more doubt on the benefit of aspirin for primary prevention. The ARRIVE (Aspirin to Reduce Risks of Initial Vascular Events) trial randomized nondiabetic individuals at moderate risk for CVD (men ≥ 55
years old, women ≥ 60 years old) to aspirin 100 mg/day versus placebo. At 5 years of follow-up, the composite CV outcome was the same in both groups, with more gastrointestinal bleeding in the aspirin group. A limitation of ARRIVE was that event rates were generally lower than expected due to enrollment of a lower-risk cohort than intended. The ASCEND (A Study of Cardiovascular Events in Diabetes) trial examined the effect of aspirin 100 mg versus placebo for primary prevention of cardiovascular events in patients with diabetes. ASCEND demonstrated that low-dose aspirin reduced cardiovascular events over a mean follow-up of 7 years in diabetic individuals (rate ratio 0.88; 95% confidence interval, 0.79–0.97; p = 0.01), while increasing major bleeding events compared with placebo (4.1% vs 3.2%; p = 0.003). The ASPREE (Aspirin in Reducing Events in the Elderly) trial evaluated the effect of aspirin versus placebo on disability-free survival, cardiovascular events, mortality, and bleeding in healthy adults older than 70 years (or ≥ 65 years among Blacks and Hispanics in the United States). Over 5 years, aspirin did not prolong disability-free survival but led to a higher rate of major hemorrhage compared with placebo. ASPREE also demonstrated higher mortality among individuals receiving daily aspirin, attributed to cancer-related death. As a reflection of these data, the US Preventive Services Task Force recommends weighing the impact of aspirin therapy on primary vascular events versus bleeding when considering initiation of treatment within the broader context of health trajectory and individual patient priorities. Aspirin dose continues to be debated, but efficacy of aspirin does not appear to increase at doses greater than 150 mg/day, and higher doses increase the risk for bleeding. Thus, 81 mg of aspirin is the dose with the best evidence for secondary prevention.
Clopidogrel, a thienopyridine that inhibits ADP-dependent platelet aggregation, when added to aspirin in the setting of NSTEMI results in a 20% relative risk reduction in cardiovascular death, MI, or stroke, as shown in the Clopidogrel in Unstable angina to prevent Recurrent Events (CURE) trial. The study found similar benefits in older patients; nevertheless, registry data suggest that the in-hospital use of clopidogrel in older patients after MI remains low. The use of clopidogrel is also recommended as an alternative to aspirin in the small subset of patients who are allergic or intolerant to aspirin. Newer ADP-dependent platelet aggregation inhibitors are used in patients following percutaneous interventions. Prasugrel has a black box warning against use in those age 75 or older or those with prior stroke or
low body mass index due to increased risk of bleeding. Ticagrelor, with a different risk profile, appears safe for use in those age 75 or older based on current evidence. Patients who require long-term dual antiplatelet therapy (DAPT) or oral anticoagulation with warfarin are advised to take a reduced aspirin dose of 81 mg daily, and all older patients should be on a reduced aspirin dose, regardless of other agents.
Antithrombotic Therapy
Consideration for the likelihood of a thrombotic process based on the type of MI and need for invasive management strategy should guide the approach to anticoagulation. Antithrombotic therapy reduces cardiovascular events in patients after an ACS, yet registry data show less use of antithrombotic therapy in older patients when compared to their younger counterparts.
Unfractionated heparin, in conjunction with antiplatelet therapy, is associated with a significant reduction in death or MI in patients with ACS. Older patients are more susceptible to overdosing (reflected by an elevated partial thromboplastin time) and bleeding. Low-molecular-weight heparin also improves clinical outcomes for ACS, with a greater relative benefit in older patients than younger patients, but caution is needed to avoid excessive dosing and bleeding complications due to its renal clearance. Bivalirudin, a direct thrombin inhibitor, is used frequently in invasively managed ACS patients, often in conjunction with oral antiplatelet loading. It has shown equivalent antithrombotic activity with less bleeding in several trials.
Newer oral antithrombotic therapies have been evaluated for safety and efficacy in reducing recurrent events in patients with ASCVD. The Apixaban for Prevention of Acute Ischemic Events 2 (APPRAISE-2) trial randomized high-risk patients following MI to apixaban 5 mg twice daily in addition to antiplatelet therapy. They found an increased risk of major bleeding without a significant reduction in recurrent ischemic events. The Cardiovascular Outcomes for People Using Anticoagulation Strategies (COMPASS) trial randomized a different population—patients with stable ASCVD—to a low dose of rivaroxaban (2.5 mg twice daily) plus aspirin versus aspirin alone.
There was a reduction in cardiovascular events, but at the expense of more bleeding with rivaroxaban. Given the increased bleeding risk, oral anticoagulants are not recommended for acute or chronic CHD in the absence of another indication (eg, atrial fibrillation or deep venous thrombosis).
β-Blockers
β-Blockers lower myocardial oxygen demand and improve coronary blood flow, with antihypertensive and anti-ischemic properties. Long-term benefits from β-blockers include management of ischemic symptoms, lowering blood pressure, and improving heart failure outcomes in those with depressed left ventricular function. Many patients, including older adults, are placed on β-blockers initially at the time of an acute MI. A meta-analysis of 25 randomized controlled trials of patients with prior MI showed β-blockers reduced all-cause mortality or MI by 25%. An observational analysis of older patients following acute MI showed those receiving β-blockers had a 33% reduction in 1-year mortality. The contemporary REACH registry analyzed use of β-blockers in three cohorts: CAD without MI, CAD with prior MI, and CAD risk factors only. There was no association between use of β-blockers and lower rates of death, nonfatal MI, or nonfatal stroke in any cohort. However, those with recent MI (≤ 1 year) had a 25% lower incidence of the composite outcome, which included cardiac rehospitalization, with β-blocker use. This suggests the greatest benefit in the contemporary era for β-blockers is in the first year following an MI. Older patients are often vulnerable to drugs with hypotensive actions and have altered responses to β-blockers owing to conduction system deterioration and the physiologic desensitization of β-adrenergic receptor function, so this information is helpful in considering continuation after 1 year. Additionally, a recent trial found that early use of β-blockers in patients with MI could worsen risks for congestive heart failure and result in poorer outcomes. Thus, β-blockers should be administered to those with an identified potential for benefit, titrated with caution, and revisited based on tolerability and clinical stability over time.
Statins
Cholesterol is a key determinant of risk, reflected by levels of LDL-C and non–HDL-C. There are several classes of drugs for lowering serum cholesterol, including fibrates, bile acid sequestrants, niacin, fish oil, ezetimibe, hydroxymethylglutaryl coenzyme A (HMG-CoA) reductase inhibitors (statins), PCSK9-inhibitors, and bempedoic acid. Of these, only statins—alone or in combination with ezetimibe or PCSK9 inhibitors—have proven effective for secondary prevention of cardiovascular events.
There is no question that for secondary prevention of ASCVD, moderate-intensity statin use reduces major vascular events, including in those aged 75 or older. Furthermore, the guideline states that it is reasonable to continue a high-intensity statin in patients aged 75 or older if tolerated. Notably, an observational study from the Veterans Affairs health system identified a graded association between statin intensity and mortality in patients with ASCVD, with high-intensity statins conferring a small but significant survival advantage compared with moderate-intensity statins in older adults (76–84 years old). Another large meta-analysis from the Cholesterol Treatment Trialists’ Collaboration found no heterogeneity of treatment effect when high-intensity statin therapy was compared with moderate-intensity statin therapy across age groups.
In the immediate post-MI period, high-intensity lipid-lowering therapy has demonstrated benefit in older patients to prevent recurrent cardiovascular events. In fact, in a post hoc analysis of the Pravastatin or Atorvastatin Evaluation and Infection Therapy—Thrombolysis in Myocardial Infarction (PROVE IT-TIMI 22) trial, patients age 70 years and older were found to derive greater benefit than younger counterparts in terms of absolute and relative reduction in cardiovascular events. A meta-analysis of age-specific outcome data from two primary prevention statin trials, JUPITER (Justification for Use of Statins in Prevention: An Intervention Trial Evaluating Rosuvastatin) and HOPE-3 (Heart Outcomes Prevention Evaluation), demonstrated a 26% relative risk reduction for those older than 70 years for the end point of nonfatal MI, nonfatal stroke, or cardiovascular death (HR, 0.74; 95% CI, 0.61–0.91; p = 0.0048). No heterogeneity of treatment effect by age was observed, but all included patients had elevated C-reactive protein levels, and most of the events occurred in those between ages 70 and 75.
There is a paucity of RCT data supporting statin use for primary prevention in older adults (≥ 75 years old), as reflected in the guideline recommendation for clinical assessment of risk when deciding whether to continue or initiate statin treatment (Class IIa). Further compounding this uncertainty, the guideline emphasizes pretreatment risk stratification using the Pooled Cohort Equations 10-year ASCVD risk calculator to guide treatment decisions; however, the calculator was derived in populations only up to age 79, and multiple studies have demonstrated its suboptimal performance in older adult populations. The guideline states that a moderate-intensity statin in adults 75 years or older with an LDL-C level of 70 to 189 mg/dL may be reasonable (Class IIb), but balances this by stating that it may be reasonable to stop statin therapy when functional decline (physical or cognitive), multimorbidity, frailty, or reduced life expectancy limits the potential benefits of statin therapy (Class IIb).
A meta-analysis of data from patients included in randomized trials comparing statins to placebo (n = 134,537) demonstrated no statistically significant benefit of statin use for primary prevention in individuals older than 75 years (Figure 74-6). Observational evidence for primary prevention in US Veterans aged 75 or older suggests that new initiation of a statin reduces all-cause and cardiovascular mortality. An observational subgroup analysis of healthy individuals aged 70 or older in the Aspirin in Reducing Events in the Elderly (ASPREE) trial showed that statin use at baseline was not associated with disability-free survival, all-cause mortality, or dementia. However, those on statins at baseline had a lower risk of physical disability and adverse cardiovascular events. The question “Is statin therapy efficacious and safe in older patients (> 75 years of age)? If so, what is a net benefit of statin therapy in this age group?” was identified as an important question to be addressed by future RCTs in the most recent ACC/AHA cholesterol guideline. Two large, currently ongoing trials (A Clinical Trial of STAtin Therapy for Reducing Events in the Elderly, STAREE, ClinicalTrials.gov: NCT02099123, and Pragmatic Evaluation of Events and Benefits of Lipid-Lowering in Older Adults, PREVENTABLE, ClinicalTrials.gov: NCT04262206) are focused on this question.
FIGURE 74-6. Forest plot of effect of primary prevention statin treatment on major vascular events stratified by age. (Adapted with permission from Cholesterol Treatment Trialists’
Collaboration. Efficacy and safety of statin therapy in older people: a meta-analysis of individual participant data from 28 randomised controlled trials. Lancet. 2019;393[10170]:407–415.)
Older adults (≥ 75 years old) with ASCVD received high-intensity statins less often than younger patients in the Patient and Provider Assessment of Lipid Management (PALM) registry, highlighting a potential gap in care. While closing treatment gaps is important, so is identifying older populations in which benefit is unlikely. A recent meta-analysis of randomized clinical trials of primary prevention in adults aged 50 to 75 found that the time to benefit for 100 adults treated with statin therapy to prevent one MACE was at least 2.5 years. An evaluation of US Medicare- and Medicaid-certified nursing home facilities demonstrated that more than one-third of nursing home residents aged 65 or older with a life-limiting illness remained on statin therapy. Time to benefit is an important consideration when initiating or discontinuing statin treatment in older individuals for primary or secondary prevention.
Statin intolerance is often a concern in older patients; however, the evidence here is reassuring. The Effect of Statins on Skeletal Muscle Function and Performance (STOMP) study demonstrated that high-dose atorvastatin did not decrease muscle strength or exercise performance in healthy subjects, despite a mild increase in myalgias with statin treatment (9.4% vs 4.6%, p = 0.05). The Self-assessment Method for Statin Side Effects or Nocebo (SAMSON) trial was a recent double-blind n-of-1 trial of patients who had recently discontinued statins because of side effects. In this trial, 90% of the symptom burden elicited by statin treatment was also elicited by placebo when statin, placebo, and no-tablet months were compared, suggesting that many muscle symptoms attributed to statins may be due to the “nocebo” effect. Reassuringly, half of the SAMSON trial participants were able to restart a statin after trial completion. Ofori-Asenso and colleagues also examined switching, discontinuing, and reinitiating statins among adults aged 65 or older in a random sample of the Australian population. They found that while statin discontinuation is common, most older individuals eventually restart a statin with improved persistent use. Importantly, older patients in the PALM registry reported tolerating statin therapy similarly to younger subjects.
Taken together this evidence suggests older adults can tolerate statin therapy as well as younger populations. Ultimately, improving persistence and compliance with statin therapy in all age groups is a key priority.
Other Lipid-Lowering Agents
Ezetimibe, the first nonstatin lipid-lowering therapy demonstrated in a randomized controlled trial to improve cardiovascular outcomes in patients with ASCVD, targets the absorption of cholesterol from the diet. In the Improved Reduction of Outcomes: Vytorin Efficacy International Trial (IMPROVE-IT), 18,144 high-risk patients with an acute coronary syndrome in the preceding 10 days were randomized to simvastatin versus simvastatin plus ezetimibe. The addition of ezetimibe to statin treatment lowered LDL-C by 24%. There was also a modest 2% absolute risk reduction in the composite primary endpoint at 7 years of follow-up (cardiovascular death, MI, hospitalization for unstable angina, coronary revascularization, and nonfatal stroke). A secondary analysis of IMPROVE-IT assessed the effect of ezetimibe plus simvastatin compared with simvastatin monotherapy among patients 75 years or older with recent acute coronary syndrome included in the trial. The authors determined that older adults in IMPROVE-IT actually derived the most benefit from the addition of ezetimibe to statin therapy, with the greatest absolute risk reduction observed in those 75 years or older, without any increase in adverse safety events. There are also data for primary prevention treatment with ezetimibe. The Ezetimibe Lipid-Lowering Trial on Prevention of Atherosclerotic Cardiovascular Disease in 75 or Older (EWTOPIA 75), a multicenter, prospective, randomized, open-label, blinded-endpoint trial in Japan, examined the preventive efficacy of ezetimibe for patients aged 75 or older with elevated LDL-C and no history of CAD. In EWTOPIA 75, ezetimibe treatment was associated with a lower rate of cardiovascular events, though the open-label nature of the trial and its early termination somewhat limit interpretation of the results.
Monoclonal antibodies against proprotein convertase subtilisin/kexin type 9 (PCSK9) are the latest addition to the lipid-lowering clinical arsenal. In recent years, two landmark trials have been published demonstrating the safety and efficacy of alirocumab and evolocumab, respectively, to lower LDL-C levels by up to 50% and improve clinical outcomes in individuals with ASCVD. These medications are recommended by the current ACC/AHA cholesterol guideline for patients with ASCVD deemed to be at high risk and with persistently elevated LDL-C levels.
However, the impact of PCSK9 inhibitors in high-risk multimorbid older adult populations has not been well described. A prespecified secondary analysis of the ODYSSEY OUTCOMES trial demonstrated that the addition of the PCSK9 inhibitor alirocumab reduced ischemic cardiovascular events in post-ACS patients on maximum-tolerated statin therapy across age groups, with increasing absolute benefit with advancing age and no significant safety concerns.
Other Secondary Prevention Strategies
Secondary prevention aims to lower the risk of recurrent cardiovascular events in patients with CHD. Since older patients with CHD face higher overall risk, the benefit of prevention in absolute terms rises with age and “number needed to treat” falls. Secondary prevention strategies target control of risk factors, such as hypertension and tobacco cessation. Exercise and lifestyle interventions should be similarly applied and are effective regardless of age. There is no upper age limit for the benefit of exercise, even if physical limitations modify the type of activity. Cardiac rehabilitation can be especially important following a cardiac event in assisting the older patient in selecting a sustainable exercise routine.
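The relationship between baseline risk and the "number needed to treat" can be made concrete with a short calculation. The event rates below are hypothetical, chosen only to illustrate the arithmetic, and do not come from any trial discussed here:

```python
import math

def number_needed_to_treat(control_rate, treated_rate):
    """NNT = 1 / absolute risk reduction, rounded up to whole patients."""
    arr = control_rate - treated_rate  # absolute risk reduction
    return math.ceil(1 / arr)

# The same 25% relative risk reduction yields very different NNTs
# depending on baseline risk (illustrative rates only):
younger_low_risk = number_needed_to_treat(0.04, 0.03)  # ARR 1% -> NNT 100
older_high_risk = number_needed_to_treat(0.20, 0.15)   # ARR 5% -> NNT 20
```

Because absolute risk rises with age, the same relative treatment effect prevents more events per patient treated, which is why the NNT falls in older populations.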
The renin-angiotensin-aldosterone system is a key determinant in hypertension, inflammation, atherosclerosis, and, ultimately, increased cardiovascular events. Angiotensin-converting enzyme (ACE) inhibitors have strong support as safe and effective treatment for hypertension and for improving survival after MI in patients with depressed heart function, heart failure, or anterior MI. The Heart Outcomes Prevention Evaluation (HOPE) study extended the benefits of ACE inhibitors to all patients with known CHD or at high risk of CHD. In HOPE, patients with CHD and other patients at high CHD risk (eg, diabetes plus one or more cardiac risk factors) were randomized to 10 mg of ramipril daily versus placebo. After 5 years, treated patients had a 26% lower risk of CHD death than those who were not treated. Rates of MI, congestive heart failure, stroke, renal dysfunction, and even development of diabetes were lower in the ACE-treated group. The treatment effects of ACE inhibition were greater in patients aged 65 or older than in younger patients. Current guidelines suggest consideration of ACE inhibitors in all patients with CHD and depressed ventricular function, diabetes, or hypertension. Some experts have suggested that these drugs be considered in all patients with known CHD regardless of other risk factors or left ventricular dysfunction, yet there remains conflicting evidence from clinical trials on this issue. Angiotensin receptor blockers (ARBs), similar to ACE inhibitors, are designed to produce antihypertensive
and anti-inflammatory effects within the cardiovascular system, and recent studies demonstrate that these agents decrease cardiovascular events. Several randomized controlled trials have shown that ARBs can slow the progression of nephropathy in patients with diabetes and microalbuminuria in a fashion similar to ACE inhibitors. Overall, the data favor the use of established ACE inhibitors for primary and secondary prevention of cardiovascular events, with consideration of ARB substitution in patients who are intolerant of ACE inhibitors (most commonly because of troublesome cough). When these drugs are used in older patients, serum electrolytes and creatinine should be monitored, as they can cause decreased renal function and hyperkalemia.
CATHETERIZATION AND REVASCULARIZATION
Unstable Angina or Myocardial Infarction
Care guidelines recommend that all patients diagnosed with an MI undergo assessment of both left ventricular function (via echocardiography or other means) and coronary disease severity. In patients with NSTEMI, the two options for assessment of post-MI risk are (1) routine angiography with revascularization as appropriate or (2) a conservative strategy of medical therapy with selection for angiography based on refractory symptoms of ischemia (an “ischemia-driven” approach). Recent clinical trials suggest that the early invasive approach may be preferable for patients at increased risk of recurrent cardiovascular events. The Treat Angina with Aggrastat and Determine Cost of Therapy with an Invasive or Conservative Strategy—Thrombolysis in Myocardial Ischemia/Infarction 18 (TACTICS-TIMI 18) trial randomized patients with unstable angina/NSTEMI to one or the other of these strategies and found that those in the early invasive arm (within 48 hours) had nearly 20% lower rates of death, nonfatal MI, or rehospitalization at 6 months than conservatively treated patients. Notably, the successful enrollment of older patients (40% of patients were ≥ 65 years) allowed identification of a 44% relative risk reduction in 30-day death or nonfatal MI among patients 65 years or older (invasive 5.7% vs conservative 9.8%; p < 0.05) and a 56% relative risk reduction in patients older than 75 years (invasive 10.8% vs conservative 21.6%; p < 0.05) with the early invasive strategy, findings consistent with a greater benefit in older relative to younger patients.
Multiple studies in older adult populations have demonstrated similar benefits to an early invasive strategy. In an open-label randomized controlled trial of patients 80 years or older with NSTEMI or unstable angina admitted to hospitals in Norway, an invasive strategy with early coronary angiography and immediate assessment for revascularization was superior to a conservative strategy of medical treatment alone for reducing cardiovascular events. In the Italian Elderly ACS Trial, a routine invasive strategy in NSTEMI patients 80 years or older was beneficial compared with a selective invasive strategy, though the trial did not meet its recruitment goal. One study of NSTEMI patients 80 years or older with chronic kidney disease found PCI offered a survival benefit regardless of eGFR but a higher risk of
bleeding with eGFR less than 30 mL/min per 1.73 m². Similarly, a meta-analysis assessing the long-term outcome of a routine versus selective invasive strategy in patients with non–ST-segment elevation acute coronary syndromes demonstrated that increasing age is actually the strongest predictor of better outcomes with a routine invasive strategy. Taken together, these findings suggest that a routine invasive strategy is the appropriate approach in most older adults with NSTEMI.
Stable Ischemic Heart Disease
A major challenge in the care of older patients with stable CHD is deciding who should undergo evaluation for coronary revascularization. In younger patients, randomized clinical trials have simplified this decision-making process by identifying subgroups in which PCI or CABG surgery improves survival and/or quality of life beyond medical therapy. However, patients older than 75 years were generally not represented in these pivotal trials, so clinicians and patients must rely on a careful comparison between the acute procedural risks and potential long-term benefits. Developments in the techniques of coronary catheterization, PCI, and CABG are changing the landscape for patient selection and outcomes of revascularization. Based on Medicare data, trends and outcomes in older patients after PCI were compared between the balloon angioplasty era (1991–1995), the bare-metal stent (BMS) era (1998–2003), and the drug-eluting stent (DES) era (2004–2006). Despite a significant increase in comorbidity, the number of post-PCI adverse cardiovascular events decreased over time, including lower rates of death and MI at 3-year follow-up. The improved outcomes were due to reductions in the need for repeat target vessel/lesion revascularizations and CABG,
highlighting the efficacy of evolving technologies and techniques, as well as improving adjunctive therapy.
The use of fractional flow reserve (FFR < 0.8) to identify lesions contributing to ischemia has been shown to improve associated PCI outcomes. The use of third-generation stents has also improved the outcomes of PCI among those with multivessel disease, although CABG continues to demonstrate superior survival and fewer events at 5 years compared with PCI. The selection of patients for CABG as the optimal revascularization strategy should include those with reduced EF (< 35% with viable myocardium), left main CAD or its equivalent, and diabetes.
Age-associated procedural mortality risk is not strictly linear but rises rapidly beyond the age of 75. Additionally, at any age, patients undergoing CABG face two- to threefold higher mortality risks compared with those undergoing angioplasty. However, technological advances have led to improved procedural success rates and lower risks for both procedures. Thus, despite procedures being performed on higher-risk patients, the risk of death after CABG in patients aged 65 or older in the Society of Thoracic Surgeons database declined nearly 20% between 1990 and 1999 and now rests at just above 4%. The Society of Thoracic Surgeons has devised risk models that estimate the impact of patient risk factors on operative morbidity and mortality and can be used in patient management.
The web-based risk calculator can be found at http://www.sts.org.
Nonfatal procedural complications (stroke, MI, and renal failure) also rise with age and are higher with CABG. Of major importance to many older patients are the procedural risks of stroke and loss of mental acuity. Patients aged 75 or older undergoing CABG have a 3% to 6% incidence of stroke, compared with an incidence of less than 1% with angioplasty. Additionally, using highly sensitive neurocognitive testing, Newman and colleagues found that up to 50% of patients of all ages undergoing CABG had measurable impairments in neurocognitive function at hospital discharge. Although half of patients with initial impairment recovered by 6 months, cognitive deficits reappeared in many of them during long-term follow-up and portended impaired functional status. However, one recent study that compared cognitive ability after CABG with angioplasty and age-matched controls noted no clinically meaningful deterioration of cognitive performance between groups. Similarly, the initial enthusiasm for improving neurocognitive outcomes by performing off-pump CABG versus traditional on-pump CABG
has been tempered by findings from a recent randomized trial, which demonstrated comparable cognitive outcomes between the two groups, although long-term follow-up has yet to occur. While chronologic age is a major risk factor for procedural complications or mortality, it is biological age that is most important to consider. For example, using published risk models, an octogenarian’s likelihood of mortality with CABG ranges from 2% for a “healthy” patient without comorbidities to nearly 30% for a patient with multiple risk factors such as diabetes or preexisting CVD.
The risks of revascularization must be balanced against the potential benefits in terms of prolonged survival, improved functional outcomes, or both. The Alberta Provincial Project for Outcomes Assessment in Coronary Heart Disease registry examined the care and outcomes of more than 6000 patients aged 70 to 79 who underwent cardiac catheterization. Compared with medical therapy, those receiving CABG or PCI had significantly higher adjusted 4-year survival rates (CABG 87%, PCI 84%, medical therapy 79%, p < 0.001). These survival benefits of revascularization also held for octogenarians and increased in patients of all ages in proportion to the number of diseased vessels and the degree of left ventricular dysfunction.
Results from the Alberta Provincial Project for Outcomes Assessment in Coronary Heart Disease registry have been confirmed in other observational analyses. Together, these studies strongly suggest that older patients with multivessel coronary disease have higher survival rates if treated with optimal revascularization than if they are treated with medical therapy alone.
There is a growing body of literature supporting the use of revascularization to reduce angina and improve functional outcomes of older patients with CHD. The Trial of Invasive Versus Medical Therapy in Elderly Patients with Chronic Symptomatic Coronary Artery Disease study randomized 305 patients with chronic angina, aged 75 or older, to diagnostic catheterization (followed by coronary revascularization as appropriate) or optimized medical therapy with intervention only for those with refractory symptoms. Of those randomized to catheterization and intervention as appropriate, 74% underwent CABG or PCI, while almost 33% of the conservative management arm crossed over to revascularization by 6 months. Patients in the early revascularization arm had significantly greater improvement in their symptoms, functional status, and quality of life when compared with medically treated patients. However, there was a higher 6-month mortality rate but a lower incidence of nonfatal MI in the early
invasive arm. While this randomized study had a small sample size and presented several methodological challenges, its results support consideration of revascularization in the very old patient with CHD. The Objective Randomised Blinded Investigation With Optimal Medical Therapy of Angioplasty in Stable Angina (ORBITA) trial randomized patients with stable angina and evidence of severe single-vessel stenosis 1:1 to either PCI or a placebo sham procedure (n = 200), demonstrating that PCI did not result in greater improvements in exercise times or angina frequency. However, the trial had a medical therapy run-in period and was powered for exercise treadmill-based endpoints. Both the COURAGE trial and the ISCHEMIA trial demonstrated improvements in angina with revascularization compared with medical therapy alone. In ISCHEMIA, patients with angina at baseline who underwent revascularization had improved symptom relief and quality of life compared with those on optimal medical therapy. Patients with more frequent angina (daily or weekly) who underwent revascularization were also more likely to be angina free at 1 year (50% of patients) compared with those on medical therapy alone (20%).
Fibrinolysis and Primary PCI
The primary management objective in patients with STEMI is to provide early reperfusion therapy by pharmacologic means (ie, fibrinolysis) or percutaneous intervention. Numerous studies have confirmed that reperfusion therapy (fibrinolytic therapy or primary angioplasty) improves survival in patients presenting with STEMI if delivered in a timely fashion. Despite this, older adults may present differently from younger adults with STEMI, more frequently having abnormal baseline ECGs and atypical symptoms that may be attributed to multifactorial causes. This is reflected in multiple studies demonstrating more frequently delayed treatment (> 90 minutes door-to-balloon time) among older individuals, as well as lower use of invasive cardiac procedures and primary PCI despite higher-risk features in older patients. Non-White race, atypical symptoms, and heart failure are all significantly associated with prehospital delays. Given that older adults are at the highest risk for mortality, they likely derive the greatest magnitude of treatment benefit from early revascularization.
An overview of the major thrombolytic trials from the Fibrinolytic Therapy Trialists’ Collaborative Group, a meta-analysis that included more than 58,000 patients, demonstrated a 15% relative risk reduction in death for patients 75 years or older with STEMI or bundle branch block treated with
fibrinolytics. Although older patients achieved a smaller relative reduction in death than younger patients, the absolute benefit in terms of lives saved with fibrinolytics was threefold greater in patients older than 75 years compared with those younger than 55 years.
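The distinction between relative and absolute benefit drawn above can be sketched numerically. The mortality rates below are hypothetical illustrations, not the trial data:

```python
def risk_reductions(control_rate, treated_rate):
    """Return (relative risk reduction, absolute risk reduction,
    lives saved per 1000 patients treated)."""
    arr = control_rate - treated_rate
    rrr = arr / control_rate
    return rrr, arr, arr * 1000

# A smaller *relative* reduction in a high-mortality older group can save
# more lives in *absolute* terms than a larger relative reduction in a
# low-mortality younger group (illustrative numbers only):
older = risk_reductions(0.25, 0.2125)    # RRR 15%, ~38 lives per 1000
younger = risk_reductions(0.05, 0.0375)  # RRR 25%, ~13 lives per 1000
```

This is why trial meta-analyses can show both a smaller relative effect and a larger absolute benefit in the oldest patients at the same time.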
In addition, an observational analysis from the ACTION Registry–GWTG found that approximately 6% of patients with STEMI treated in the community were 85 years or older (median age 88). Compared with younger patients, the oldest old were more likely to be women and more likely to have hypertension, prior heart failure, and prior stroke. More than 42% of the oldest old were also cited as having contraindications to reperfusion, but absolute or relative contraindications were reported in only 10%; patient preference was the most common reason indicated (45%). Even reperfusion-eligible patients in this oldest group were less likely to receive reperfusion, and those who did receive it experienced neither mortality benefit nor harm. Similarly, in the Nationwide Inpatient Sample, STEMI patients admitted from a nursing home were compared with those admitted from the community. Compared with their community-dwelling counterparts, nursing home residents were less likely to receive reperfusion therapy for STEMI and had higher in-hospital mortality.
The possible benefits of thrombolysis must be weighed carefully against the risks, especially in the older population. Data from the Fibrinolytic Therapy Trialists’ meta-analysis showed that patients older than 70 years had nearly a threefold higher relative risk of intracranial hemorrhage, the most feared complication of the postlytic period, than those younger than 60 years. Bearing this in mind, clinicians should recognize that intracranial hemorrhage is a rare event: the absolute risk of this complication after fibrinolysis in those older than 70 years remains between 0.7% and 2.1% in major trials, approaching 3% in those older than 85 years. Risk factors for intracranial hemorrhage include low body weight, elevated blood pressure, facial or head trauma, and dementia; in one trial, dementia increased the risk of intracranial hemorrhage threefold.
In contrast with mixed results for fibrinolysis, timely reperfusion for STEMI with PCI is almost universally associated with improved outcomes in all age groups. For example, a randomized study of primary PCI versus thrombolysis in patients with STEMI showed a 40% relative risk reduction for death, MI, and stroke in patients treated within 3 hours of presentation with PCI. In the ACTION Registry, primary PCI was associated with lower
30-day and 1-year mortality when compared to no therapy or thrombolysis among older patients with acute MI. Over the past decade, the use of primary PCI overall, and in older patients, has dramatically increased, in-hospital mortality has been reduced, and complications are lower with direct revascularization. However, when percutaneous intervention is not available in a timely manner, thrombolytic therapy may improve outcomes when given in the right window of time. Ultimately, an early invasive strategy aimed toward timely revascularization is a safe and effective approach for the majority of older adults presenting with acute MI with the purpose of improving survival.
SPECIAL CONSIDERATIONS
Multimorbidity and Frailty
The diagnosis and care of older CHD patients invokes an interplay of biological differences, comorbid conditions, functional status, drug pharmacology, and goals of care. The traditional approach of one disease at a time is of limited utility in the older population. Of Medicare beneficiaries, 68% have two or more chronic conditions, and 14% have six or more. Among Medicare beneficiaries with a diagnosis of ischemic heart disease, 81% have hypertension, 69% have hyperlipidemia, 42% have diabetes, 41% have arthritis, 39% have anemia, 36% have heart failure, and 30% have chronic kidney disease.
Frailty is also common in patients with CHD, including those presenting with acute MI, in part because of shared risk factors and common end-organ manifestations. Frailty increases mortality and morbidity among those hospitalized with MI, with a twofold increased risk of mortality, rehospitalization, bleeding, stroke, or dialysis at 1 year. Similar increased risks are noted for frail elders following PCI and CABG. Frailty has been shown to be associated with longer hospital stays, higher rates of delirium, and increased resource utilization. Despite this increased risk, PCI still confers a survival benefit in frail older individuals presenting with acute MI, and recent studies have not shown a significant difference in complication rates between frail and nonfrail older individuals. Thus, frail older adults with acute MI can safely undergo PCI assuming no other contraindications to treatment.
Guidelines are based on trials performed predominantly in younger patients. Patients older than 75 years comprise approximately 9% of the population enrolled in clinical trials of ACS therapies but account for more than 37% of patients with ACS in the community. Despite calls for inclusion, this trend has shown little improvement over the last two decades. There is good reason to believe that older adult populations with acute MI have key differences from younger populations that may impact treatment strategies and outcomes. Recently, the SILVER-AMI registry assessed older adults (≥ 75 years) presenting with acute MI, demonstrating a high prevalence of functional impairments, including deficits in cognition, strength, and sensory domains; notably, these functional impairments were among the strongest predictors of 6-month mortality. In fact, hearing impairment, mobility impairment, recent weight loss, and lower patient-reported health status were all predictive of 6-month post-AMI mortality in this population. Given the current knowledge gaps as well as the inherent complexity of this population, the approach to cardiac care of the older patient requires a person-centered plan incorporating goals and health state.
Procedural Considerations
Despite the challenge of more complex anatomy, particularly from the right radial artery, a transradial approach has a lower complication rate (especially bleeding complications) compared with the transfemoral approach and should be considered the first choice for arterial access in older patients.
While BMS have been historically considered in older frail patients in order to shorten DAPT duration and reduce bleeding risk, recent data suggest that treatment with DES remains preferable. In the XIMA (Xience or Vision Stents for the Management of Angina in the Elderly) trial, a randomized trial of everolimus-eluting stents versus BMS in octogenarians, DES were associated with a lower incidence of MI and target vessel revascularization without an increased risk of major hemorrhage. In another single-blind randomized trial of DES in older patients with CAD (SENIOR), a DES with short duration of DAPT improved the composite of all-cause mortality, MI, stroke, and ischemia-driven target lesion revascularization compared with BMS with a similar duration of DAPT. Thus, the use of DES is preferable to BMS in older patients who require PCI.
Role of Palliative Care
Treatment algorithms for CHD are ideally focused on symptom management—aligned with the goals of palliative care—yet use of palliative care itself remains low in cardiology. There is a subset of patients with CHD with poor prognosis related to their coronary disease or other multimorbid conditions who benefit from palliative care, and palliative care use has been increasing among individuals hospitalized with AMI (from 0.2% in 2002 to 3.0% in 2016), particularly those with cardiogenic shock (from 0.6% in 2002 to 14.0% in 2016). Older age is strongly associated with increasing odds of palliative care use. When goals of care prioritize comfort, palliative care can also be initiated in the outpatient setting to avoid future hospitalizations and invasive procedures. Ultimately, increasing uptake of palliative care for select patients with CHD will require defining the optimal integration of palliative care into the CHD treatment paradigm in older adults.
Patient Preferences
Cardiac treatment plans need to consider the patients’ overall health, as well as their preferences and willingness to accept risk. While some older individuals engage in very active, independent lives well into their advanced years, others are frail and suffer disabling physical and/or mental illnesses.
Beyond this variability in health and functional status, there is great diversity in the health values of older patients. Some consider illness and disability to be inevitable and have no interest in extensive medical or surgical intervention. However, many older patients favor longevity if coupled with good cognitive ability and lack of disability. In hospital settings, many older patients feel vulnerable and abdicate decision-making to family or physicians entrusted to act in the patient’s best interest. Our group assessed the extent to which individual knowledge, preferences, and priorities explain the lower use of invasive cardiac care among older versus younger adults presenting with acute coronary syndrome, demonstrating that age influences risk tolerance for CABG surgery, treatment goals, and willingness to consider invasive cardiac care. It is incumbent on those caring for older patients to attempt to elicit preferences while providing the necessary information regarding potential risks and benefits of treatment options. Ethical mandates at the core of shared decision-making include autonomy (goals of care) and nonmaleficence (do no harm).
Renal Function and Pharmacology
An individual’s renal function remains a powerful predictor of cardiovascular morbidity and mortality. In the Cooperative Cardiovascular Project, renal dysfunction predicted adverse outcomes in an older post-MI population, such that 1-year mortality was 24% if serum creatinine was below 1.5 mg/dL and 66% if creatinine was above 2.5 mg/dL. The pitfalls of using serum creatinine as a surrogate for renal function are often compounded in the older population. Consistent with recommendations from the Panel on Acute Coronary Care in the Elderly, the creatinine clearance should be calculated for all patients 75 years or older who present with an ACS. In addition, the clinician should remain cognizant of changes in the creatinine clearance during the index hospitalization and after discharge, as several of the medications prescribed may have an impact on renal function.
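The chapter does not specify which formula to use; the Cockcroft-Gault equation is one widely used bedside estimate. A short sketch (the function name and example values are illustrative, not from the text) shows why a “normal” serum creatinine can mask substantially reduced clearance in an older patient:

```python
def cockcroft_gault_crcl(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault equation."""
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# An 80-year-old, 60-kg woman with a "normal" serum creatinine of 1.0 mg/dL:
# (140 - 80) * 60 / (72 * 1.0) * 0.85 = 42.5 mL/min -- markedly reduced
# clearance despite a normal-looking creatinine.
print(round(cockcroft_gault_crcl(80, 60, 1.0, female=True), 1))  # prints 42.5
```

This illustrates why calculated clearance, rather than serum creatinine alone, is recommended for older patients presenting with an ACS.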
From the Global Registry of Acute Coronary Events study, a 10 mL/min decrease in creatinine clearance had the same impact on in-hospital mortality as a 10-year increase in age. The role of renal dysfunction in the management of older patients with ACSs cannot be overemphasized, as this entity plays a pivotal role at the interface of pharmacologic management.
Cardiovascular drugs are among the most commonly prescribed therapies in older patients, and altered pharmacokinetics (ie, drug distribution and metabolism) are frequently observed in older patients as a consequence of decreased lean body mass and volume of distribution. Combined, these factors lead to higher drug concentrations and prolonged half-lives. A drug’s pharmacodynamics (ie, the effect of a drug on a target cell) can be considerably altered with age. For example, increased calcification of the cardiac conduction system can increase an older patient’s sensitivity to atrioventricular nodal blocking agents and lead to profound bradycardia.
Comorbid illness and frailty can also influence drug selection and safety. For instance, a frail older person may have a higher risk of falling, which can markedly increase the likelihood of bleeding complications with anticoagulants. Finally, polypharmacy is often a serious risk in older patients and can lead to life-threatening drug–drug interactions and poor adherence because of confusion over medications and/or prohibitive costs.
SUMMARY
Despite advances in prevention and treatment, CHD remains a major health problem for older patients. As the population ages, the need for evidence-based cardiac care for patients aged 75 or older will increase substantially. Older patients benefit as much as, if not more than, younger patients from existing therapies. However, the care of CHD in older patients occurs in the context of their multidimensional health status and requires awareness of atypical presentations of ACS, altered pharmacokinetics of therapy, and underlying cognitive and functional status. Bearing this in mind, treatment paradigms applied to younger patient groups are often appropriate when treating older patients, and adherence to guidelines translates into better outcomes. The classification of physiologic frailty may offer additional risk information for older patients considering revascularization. This information could identify a cohort who may benefit from a trial of medical therapy optimization or geriatric intervention before revascularization, or for whom alternate treatment or palliative care is the preferred route. Despite the significant challenges in cardiovascular care of the oldest old, redirecting efforts toward a person-centered model may provide the best opportunity to improve the outcomes that matter most.
FURTHER READING
Armitage J, Baigent C, Barnes E, et al. Efficacy and safety of statin therapy in older people: a meta-analysis of individual participant data from 28 randomised controlled trials. Lancet. 2019;393:407–415.
Bach RG, Cannon CP, Giugliano RP, et al. Effect of simvastatin-ezetimibe compared with simvastatin monotherapy after acute coronary syndrome among patients 75 years or older: a secondary analysis of a randomized clinical trial. JAMA Cardiol. 2019;4:846–854.
Dodson JA, Hajduk AM, Geda M, et al. Predicting 6-month mortality for older adults hospitalized with acute myocardial infarction: a cohort study. Ann Intern Med. 2020;172:12–21.
Elgendy IY, Elbadawi A, Sardar P, et al. Palliative care use in patients with acute myocardial infarction. J Am Coll Cardiol. 2020;75:113–117.
Gencer B, Marston NA, Im K, et al. Efficacy and safety of lowering LDL cholesterol in older patients: a systematic review and meta-analysis of randomised controlled trials. Lancet. 2020;396:1637–1643.
Grundy SM, Stone NJ, Bailey AL, et al. 2018 AHA/ACC/AACVPR/AAPA/ABC/ACPM/ADA/AGS/APhA/ASPC/NLA/PCNA Guideline on the Management of Blood Cholesterol: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. Circulation. 2019;139:e1082–e1143.
Kumar S, McDaniel M, Samady H, Forouzandeh F. Contemporary revascularization dilemmas in older adults. J Am Heart Assoc. 2020;9:e014477.
Lowenstern A, Alexander KP, Hill CL, et al. Age-related differences in the noninvasive evaluation for possible coronary artery disease: insights from the Prospective Multicenter Imaging Study for Evaluation of Chest Pain (PROMISE) Trial. JAMA Cardiol. 2020;5:193–201.
McNeil JJ, Woods RL, Nelson MR, et al. Effect of aspirin on disability-free survival in the healthy elderly. N Engl J Med. 2018;379:1499–1508.
Nanna MG, Navar AM, Wang TY, et al. Statin use and adverse effects among adults >75 years of age: insights from the Patient and Provider Assessment of Lipid Management (PALM) Registry. J Am Heart Assoc. 2018;7:e008546.
Nanna MG, Peterson ED, Wu A, et al. Age, knowledge, preferences, and risk tolerance for invasive cardiac care. Am Heart J. 2020;219:99–108.
Ofori-Asenso R, Ilomaki J, Tacey M, et al. Switching, discontinuation, and reinitiation of statins among older adults. J Am Coll Cardiol. 2018;72:2675–2677.
Orkaby AR, Driver JA, Ho YL, et al. Association of statin use with all-cause and cardiovascular mortality in US veterans 75 years and older. JAMA. 2020;324:68–78.
Ouchi Y, Sasaki J, Arai H, et al. Ezetimibe Lipid-Lowering Trial on Prevention of Atherosclerotic Cardiovascular Disease in 75 or Older (EWTOPIA 75). Circulation. 2019;140:992–1003.
Ouellet GM, Geda M, Murphy TE, Tsang S, Tinetti ME, Chaudhry SI. Prehospital delay in older adults with acute myocardial infarction: the ComprehenSIVe Evaluation of Risk Factors in Older Patients with Acute Myocardial Infarction Study. J Am Geriatr Soc. 2017;65:2391–2396.
Rodriguez F, Maron DJ, Knowles JW, Virani SS, Lin S, Heidenreich PA. Association between intensity of statin therapy and mortality in patients with atherosclerotic cardiovascular disease. JAMA Cardiol. 2017;2:47–54.
Sinnaeve PR, Schwartz GG, Wojdyla DM, et al. Effect of alirocumab on cardiovascular outcomes after acute coronary syndromes according to age: an ODYSSEY OUTCOMES trial analysis. Eur Heart J. 2019;41:2248–2258.
Tegn N, Abdelnoor M, Aaberge L, et al. Invasive versus conservative strategy in patients aged 80 years or older with non-ST-elevation myocardial infarction or unstable angina pectoris (After Eighty study): an open-label randomised controlled trial. Lancet. 2016;387:1057–1065.
Thygesen K, Alpert JS, Jaffe AS, et al. Fourth universal definition of myocardial infarction (2018). J Am Coll Cardiol. 2018;72(18):2231–2264.
Zhou Z, Ofori-Asenso R, Curtis AJ, et al. Association of statin use with disability-free survival and cardiovascular disease among healthy older adults. J Am Coll Cardiol. 2020;76(1):17–27.
Chapter 75
Valvular Heart Disease
Nikola Dobrilovic, Dae Hyun Kim, Niloo M. Edwards
INTRODUCTION
As the population ages, valvular heart disease has become a significant public health problem. The prevalence of moderate or severe valvular heart disease increases with age, from less than 1% in 18- to 44-year-olds to 13% in the population 75 years or older. Without valve replacement, valvular heart disease is associated with decreased survival, functional limitations, and poor quality of life. Due to recent advances in surgical techniques, especially minimally invasive transcatheter valve procedures, older adults who were previously not considered for surgery can now be treated to improve survival and restore function and quality of life. However, challenges remain in patient selection for surgical and transcatheter valve procedures, patient goal-directed shared decision-making, and optimization of health status before and after the procedure. This chapter summarizes the latest evidence on the evaluation and management of common valvular heart diseases in older adults, with a focus on the geriatrician’s role in risk assessment and shared decision-making.
AORTIC STENOSIS
Definition
Aortic stenosis is the progressive narrowing of the aortic valve resulting in left ventricular (LV) outflow obstruction during systole. This is in distinction to aortic valve sclerosis, where the valve leaflets are calcified or thickened, but do not cause a meaningful outflow obstruction.
Epidemiology
Aortic stenosis is present in 2% to 9% of older patients and is the leading clinically significant valvular disorder in older adults. Risk factors for developing aortic stenosis include age, a bicuspid aortic valve, and rheumatic heart disease. In 90% of patients older than 65 years, aortic stenosis is caused by calcific degeneration of a tricuspid aortic valve.
Although bicuspid valves are relatively common (~2% of the population), these patients present with stenosis earlier, usually in the fourth to sixth decade of life. Similarly, rheumatic heart disease also presents earlier in life and often in association with concurrent mitral valve disease.
Learning Objectives
Describe clinical features, diagnostic modalities, and therapeutic options for common valvular diseases in older patients.
Identify when patients with valvular disease should be offered surgical intervention.
Key Clinical Points
Aortic stenosis is very common in older patients, and novel surgical approaches allow older patients a greater number of surgical treatment options.
Transcatheter aortic valve replacement (TAVR)—an alternative to surgical aortic valve replacement (SAVR)—may be considered in older patients.
Aortic insufficiency is managed similarly in younger and older patients (and currently may not be as well-suited for TAVR).
Mitral regurgitation may be structural or functional. Once symptomatic, it is better treated with mitral valve repair than replacement.
The primary treatment option for mitral stenosis is valvuloplasty. Surgical intervention is reserved for severely calcified valves and is associated with high risk.
Perform a shared decision-making discussion about treatment options after considering each patient’s personal goals and surgical and geriatric risk factors.
Anticoagulation and valve degeneration are the two important risks associated with mechanical and biological prostheses, respectively.
A multidisciplinary team approach is advocated to provide patient goal-directed care in managing older patients with valvular heart disease.
Pathophysiology
Although the causes of aortic valve calcification in aging are unclear, the process bears many similarities to atherosclerosis—both diseases are characterized by lipid deposition, inflammation, neoangiogenesis, and calcification. Bicuspid aortic valves are characterized by accelerated calcification and progressive outflow obstruction in the majority of patients. Rheumatic fever results in progressive fusion of the aortic valve leaflets causing both aortic valve stenosis and regurgitation. Aortic stenosis is classified as mild, moderate, or severe based on valve area, ejection velocity, and the pressure gradient that develops across the valve (Table 75-1).
TABLE 75-1 ■ ECHOCARDIOGRAPHIC FINDINGS IN AORTIC STENOSIS
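Because Table 75-1 itself is not reproduced in this extract, the commonly cited guideline thresholds (severe: valve area < 1.0 cm², peak jet velocity ≥ 4.0 m/s, or mean gradient ≥ 40 mm Hg; moderate: valve area 1.0–1.5 cm², velocity 3.0–3.9 m/s, gradient 20–39 mm Hg; otherwise mild) can be sketched as a simple classifier. Grading by the worst single parameter is an assumption of this sketch; discordant measurements require clinical integration:

```python
def classify_aortic_stenosis(valve_area_cm2, peak_velocity_m_s, mean_gradient_mm_hg):
    """Grade aortic stenosis severity from echocardiographic measures,
    using commonly cited guideline cutoffs; the worst single parameter
    determines the grade (a simplification for illustration)."""
    if (valve_area_cm2 < 1.0 or peak_velocity_m_s >= 4.0
            or mean_gradient_mm_hg >= 40):
        return "severe"
    if (valve_area_cm2 <= 1.5 or peak_velocity_m_s >= 3.0
            or mean_gradient_mm_hg >= 20):
        return "moderate"
    return "mild"

print(classify_aortic_stenosis(0.8, 4.5, 45))   # prints "severe"
print(classify_aortic_stenosis(1.3, 3.5, 30))   # prints "moderate"
```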
Aortic valve sclerosis (valve thickening without outflow tract obstruction) is present in 25% of patients older than 65 years and 48% of those older than 75 years, and is associated with male gender, hypertension, smoking, diabetes, and lipid abnormalities. Progression to frank stenosis occurs in approximately 10% of patients within 5 years. The Cardiovascular Health Study has identified an increased incidence of adverse cardiovascular events in patients with sclerotic valves even when corrected for other cardiovascular risk factors. The mechanism for this association is unclear, and there are currently no guidelines for intervention.
On average, aortic valve stenosis progresses at an estimated increase in jet velocity of 0.3 m/s/year and a reduction in valve area of 0.1 cm²/year. Despite these average rates of disease progression, the rate for each individual is difficult to predict; therefore, asymptomatic patients with mild-to-moderate disease should be followed on a regular basis.
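Under the stated average rates, a simple linear projection gives a rough sense of surveillance timelines. Linearity and the 1.0 cm² severe-stenosis threshold are assumptions of this sketch, and the text itself cautions that individual progression rates are unpredictable:

```python
SEVERE_AREA_CM2 = 1.0              # commonly cited severe-AS threshold (assumed)
AVG_AREA_LOSS_CM2_PER_YEAR = 0.1   # average rate quoted in the text

def years_until_severe(current_area_cm2,
                       annual_loss=AVG_AREA_LOSS_CM2_PER_YEAR,
                       threshold=SEVERE_AREA_CM2):
    """Project years until the valve area reaches the severe threshold,
    assuming the *average* linear decline; individual rates vary widely."""
    if current_area_cm2 <= threshold:
        return 0.0
    return (current_area_cm2 - threshold) / annual_loss

# A valve area of 1.5 cm2 would, at the average rate, reach the severe
# threshold in about 5 years -- hence the need for regular follow-up
# of mild-to-moderate disease.
print(round(years_until_severe(1.5), 1))  # prints 5.0
```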
Clinical Presentation
Aortic stenosis has a long asymptomatic latency period, during which the only finding is a harsh, late-peaking, crescendo-decrescendo systolic murmur that radiates to the carotids and is best heard over the right second interspace.
The second heart sound may be paradoxically split. Aortic stenosis is associated with “pulsus parvus et tardus,” characterized by a weak and diminished pulse with a late upstroke that is most easily noted in the carotids. However, these physical findings may be less obvious in older adults, because of the effects of aging on the vascular bed.
Patients with aortic valve stenosis develop compensatory LV hypertrophy, which can be seen on echocardiogram, on electrocardiogram, and even on chest x-ray. The ventricular hypertrophy produces coronary malperfusion with subendocardial ischemia. Older women are prone to develop excessive ventricular hypertrophy, which may contribute to the higher perioperative morbidity and mortality in this patient cohort.
Once symptoms develop after a long latency period, the progression to death is rapid (Figure 75-1). The three classic symptoms are angina, syncope, and heart failure. While sudden death occurs in patients with aortic stenosis and may be considered a fourth symptom group, this is rarely seen in asymptomatic patients (< 1%). Unfortunately, older adults often move into the symptomatic phase of aortic stenosis undetected because of the overlap of these major symptom constellations with other changes associated with aging (eg, reduced exercise tolerance).
FIGURE 75-1. Valvular aortic stenosis in adults. Average course (postmortem data). (Reproduced with permission from Ross J Jr, Braunwald E. Aortic stenosis. Circulation. 1968;38[1 Suppl]:61–67.)
Two-thirds of patients present with angina, which may be caused by concomitant coronary artery disease, although 40% do not have significant coronary artery disease. The most likely etiology for angina in the absence of coronary artery disease is subendocardial ischemia and the increased oxygen demands of the hypertrophied ventricle together with decreased coronary flow reserve. Untreated patients with aortic stenosis and angina have a 50% 5-year survival.
Syncope due to aortic stenosis may be caused by inadequate cardiac output to meet demands, by a dysfunctional LV baroreceptor response, or by arrhythmias, and is associated with a 50% 3-year mortality without valve replacement.
Aortic valve stenosis presenting with congestive heart failure carries the worst prognosis—50% mortality at 2 years without valve replacement.
Typical symptoms include paroxysmal nocturnal dyspnea, orthopnea, and dyspnea on exertion, which may be associated with signs of peripheral edema, pulmonary edema, and rales. Thickening of the left ventricle due to aortic stenosis, as well as the changes associated with aging, leads to diastolic dysfunction. Consequently, the older patient with aortic stenosis is more dependent on atrial contraction for ventricular filling. Therefore, these patients often present with exacerbated or new-onset symptoms if they develop atrial fibrillation.
Impaired platelet function and decreased levels of von Willebrand factor are also associated with severe aortic stenosis, and 20% of patients may present with epistaxis or ecchymoses. Patients can also develop Heyde syndrome (gastrointestinal bleeding due to colonic angiodysplasias). Interestingly, these abnormalities resolve with valve replacement.
Evaluation
The American Heart Association recommends evaluation of early systolic, mid-systolic grade 3 or greater, late systolic, or holosystolic murmurs with echocardiography. Older patients may present with ominous-sounding murmurs due to aortic valve sclerosis without significant valvular stenosis. Transthoracic echocardiography is the study of choice since it allows evaluation of valve morphology, severity of stenosis, and degree of LV hypertrophy and function. Echocardiography is also useful for following disease progression.
Stress echocardiogram can be utilized in asymptomatic patients with severe aortic stenosis to assess for physiologic changes that may indicate the need for earlier intervention. Computed tomography (CT) angiography is routinely used in patients being considered for transcatheter aortic valve replacement (TAVR) but is not helpful in assessing severity of aortic stenosis. Cardiac magnetic resonance imaging (MRI) is sometimes helpful in evaluating the severity of stenosis but is not widely used. Cardiac catheterization is routinely performed in older patients who are scheduled for valve replacement, in order to diagnose concomitant coronary artery disease and to measure transvalvular gradients if there is a question of severity of stenosis.
Management
Asymptomatic patients Survival of asymptomatic patients is the same as that of age-matched individuals without aortic stenosis. However, given the significant decline in survival once symptoms develop, it is essential to confirm the absence of symptoms in patients who do not appear symptomatic and to consider surgical intervention. If a careful history fails to elicit symptoms in patients with severe aortic stenosis, exercise testing may be considered.
However, exercise testing in symptomatic patients is contraindicated because of the high risk of severe hemodynamic compromise.
Patients who are asymptomatic by history but who, on exercise testing, develop symptoms, fail to generate a 20 mm Hg increase in blood pressure, or develop ST-segment abnormalities, have a 19% 2-year symptom-free survival compared with 85% for patients who do not manifest these abnormalities on exercise testing. Exercise testing may elicit symptoms in as many as a third of patients thought to be asymptomatic by history alone.
Close supervision and prompt termination of the study at any decline in blood pressure, significant ST-segment depression, or onset of arrhythmia are strongly advocated. On average, the probability of a patient with severe aortic stenosis remaining symptom-free at 5 years is only 50%, which has prompted some to recommend earlier surgery while the patient is “younger” and in better health.
If the patient is truly asymptomatic, continued frequent routine monitoring is reasonable, but patients should be instructed to report the development of angina, syncope, or any signs of congestive heart failure. Echocardiographic monitoring should be performed every 6 to 12 months for patients with severe aortic stenosis, every 1 to 2 years for patients with moderate aortic stenosis, every 3 to 5 years if the stenosis is mild, and as needed with referral for possible valve replacement if the patient develops symptoms. Patients who are demonstrated to be symptom-free need not restrict their activity and may exercise.
Symptomatic patients Once patients develop symptoms, they should be considered for valve replacement. Currently, there is no documented medical treatment that will delay or reverse aortic stenosis. Therefore, medical management is palliative and mainly reserved for patients with a remaining life expectancy of less than 1 year even with a successful procedure, or a high chance of poor outcomes (death or no symptom reduction) due to advanced age, frailty, dementia, and other systemic conditions. Although standard guidelines for the management of hypertension are recommended, β-blockers are salutary in patients with concomitant coronary artery disease, and angiotensin-converting enzyme (ACE) inhibitors may have a beneficial effect on LV fibrosis. Diuretics are discouraged if the left ventricle is small due to the potential decrease in cardiac output. Statins have not demonstrated a regression of stenosis in randomized controlled trials, although they are indicated for patients with concomitant coronary artery disease or at high risk for atherosclerotic cardiovascular disease.
Aortic valve replacement—surgical aortic valve replacement (SAVR) or TAVR—should be considered for all symptomatic patients whose remaining life expectancy is at least 1 year (ie, no other significant life-limiting systemic disease) because of the improvement in both symptoms and survival. Current American College of Cardiology/American Heart Association recommendations are listed in Table 75-2. Due to the increased risk of sudden death, replacement should be performed as soon as feasible after the development of symptoms.
TABLE 75-2 ■ AMERICAN COLLEGE OF CARDIOLOGY/AMERICAN HEART ASSOCIATION RECOMMENDATIONS FOR VALVE REPLACEMENT IN AORTIC STENOSIS
Percutaneous aortic valvuloplasty Balloon aortic valvuloplasty (BAV) is not a substitute for valve replacement, but it can be a useful tool in the treatment armamentarium for temporary palliation of symptoms in nonsurgical candidates or as a bridge for patients with hemodynamically unstable aortic stenosis. The procedure uses transvalvular balloon inflation to crack the calcified aortic valve. Unfortunately, the valve area achieved rarely exceeds 1.0 cm² (ie, improvement from severe to moderate stenosis), the procedure carries a 10% risk of complications, and restenosis occurs within 6 months to a year. One-year actuarial mortality is 35% to 50%, no better than untreated aortic stenosis, but BAV can provide significant temporary relief of symptoms and improvement in quality of life and can be a useful tool to optimize acutely decompensated patients prior to TAVR or SAVR. BAV is often a component of TAVR, performed immediately before valve deployment, although this step is increasingly skipped in favor of a single-step deployment technique.
Surgical aortic valve replacement Given long-term durability data (rate of primary structural deterioration is approximately 10% after 15–20 years), SAVR with a bioprosthetic valve can be considered for patients older than 65 years. Age alone is not a contraindication to SAVR, as numerous studies have demonstrated outcomes in carefully selected older patients to be comparable to those seen in younger patients. Operative mortality in older patients ranges from 3% to 4% to as high as 24%, depending on patient selection. Medicare outcomes data for 142,000 patients older than 65 years demonstrate an operative mortality of 8.8% overall, and 6.0% mortality in high-volume centers.
Operative risk associated with SAVR should be assessed using the Society of Thoracic Surgeons (STS) Predicted Risk of Mortality or EuroSCORE II risk calculator. Predictors of surgical mortality include emergency surgery, right heart failure, severity of symptoms (New York Heart Association [NYHA] class IV), renal insufficiency, female gender, depressed LV function, associated coronary bypass, or concomitant mitral valve surgery. Emergency surgery increases the surgical risk substantially and is often the result of not referring the patient for elective surgery because it is “too risky,” but reconsidering surgery when the patient is critically ill and the medical options have dwindled to none. Unfortunately, the result is a self-fulfilling prophecy that older patients will not do well with surgery. A European study of older patients with aortic stenosis provocatively demonstrated that 41% of patients older than 70 years were not offered surgery despite severe valve stenosis and symptoms. These findings were corroborated in a second study of 1200 patients from 92 centers in 25 countries. In this study, 33% of patients older than 75 years with severe symptomatic aortic stenosis were not offered valve replacement.
Although many studies have found that concomitant coronary artery bypass or mitral valve surgery increases the surgical risk, there is clear support for performing concomitant aortic valve replacement for any patient with severe aortic stenosis, who is undergoing any other cardiac surgical procedure, regardless of symptoms. Similarly, in patients with moderate stenosis, it is “accepted practice” to replace the aortic valve at the time of other cardiac surgery. Simultaneous valve replacement in patients with mild aortic stenosis undergoing heart surgery is more controversial, although an argument can be made for concomitant replacement in patients with mild stenosis but moderate-to-severe valve calcification. However, some of the calculus of risk is changing with the availability of TAVR.
Transcatheter aortic valve replacement An alternative to SAVR is TAVR, in which a catheter-mounted bioprosthetic valve is deployed across the aortic valve.
The valve can be introduced through the femoral artery, the apex of the heart, subclavian artery, carotid artery, or the aorta via small incisions.
Alternatively, peripheral arteries such as femoral and subclavian can be accessed percutaneously and controlled using minimally invasive closure devices such as Perclose Proglide (Abbott Cardiovascular, Plymouth, MN).
Randomized controlled trials demonstrated that TAVR, particularly transfemoral TAVR, is as effective as SAVR in symptomatic older patients who are considered low, intermediate, and high operative risk, with different procedural risks of complications. TAVR is associated with lower rates of postoperative stroke, major bleeding, and atrial fibrillation, as well as faster recovery. However, the rates of paravalvular leak, vascular complications, and pacemaker implantation are higher with TAVR. Because durability of TAVR valves beyond 5 years is unknown, some patients may need a reintervention (“valve-in-valve” procedure). In symptomatic patients with prohibitive SAVR risk, defined as STS Predicted Risk of Morbidity and Mortality of over 50% due to comorbid disease or serious irreversible conditions, TAVR reduces mortality, hospitalizations, and symptoms, but causes higher rates of stroke and vascular complications, compared to medical management with or without percutaneous aortic valvuloplasty.
Surgical versus transcatheter aortic valve replacement The choice of SAVR versus TAVR should involve shared decision-making that carefully considers the patient’s age, preferences, procedure-specific risk factors or contraindications, procedural complications, and durability of the valve relative to the patient’s remaining life expectancy (Table 75-3). A multidisciplinary team approach is invaluable for optimal procedure selection (TAVR vs SAVR). This approach to individualized risk assessment is described later in this chapter.
TABLE 75-3 ■ FACTORS FAVORING SAVR, TAVR, OR PALLIATIVE CARE
AORTIC INSUFFICIENCY
Definition
Aortic insufficiency occurs when the aortic valve fails to close during diastole resulting in blood flow from the aorta back into the left ventricle.
Epidemiology
Acute aortic regurgitation is uncommon and presents a surgical emergency. Chronic aortic insufficiency occurs in 20% to 30% of individuals older than 65 years and, like aortic stenosis, has a long asymptomatic latency period.
However, even asymptomatic patients with normal LV function have a 0.2% incidence of sudden death, progress to symptomatic disease at a rate of approximately 3.5% per year, and develop either LV dysfunction or symptoms at a rate of approximately 6% per year. Once patients develop LV dysfunction, more than 25% each year will progress to symptomatic disease and, once symptomatic, the mortality rate for aortic regurgitation is more than 10% per year. Patients with NYHA class III or IV symptoms have an annual mortality rate of 25%, while patients with less severe symptoms (NYHA class II) have a 6% annual mortality rate.
Risk factors for the development of LV dysfunction, symptoms, or death include age, left ventricular end-systolic dimension (LVESD)/volume, left ventricular end-diastolic dimension (LVEDD)/volume, and LV ejection fraction with exercise. Each year, 19% of patients with an end-systolic size greater than 50 mm develop LV dysfunction and symptoms or die. The rate of development of these same end points was 6% per year for patients with end-systolic size between 40 and 50 mm.
Presentation
The findings and eponyms associated with aortic insufficiency are a delight to lovers of medical trivia (Table 75-4). However, the most obvious physical findings are those of a diastolic murmur and a widened pulse pressure.
TABLE 75-4 ■ EPONYMOUS SIGNS OF AORTIC INSUFFICIENCY
Patients with acute aortic regurgitation usually present dramatically in cardiogenic shock. The most common causes are infective endocarditis and acute aortic dissection. Data suggest a dramatic recent increase in intravenous opioid abuse as a cause of endocarditis. This effect is nationwide, although it is disproportionately seen in younger adults and much less in the geriatric population. However, geriatric patients still tend to be affected by more traditional etiologies of endocarditis, such as infected foreign bodies (dialysis catheters and access, other catheters, intravenous pacing leads), dental infection, remote abscesses, sepsis, and pneumonia.
Chronic aortic regurgitation progresses slowly and insidiously. Occasionally, palpitations or awareness of each heartbeat may be the first signs, owing to the large regurgitant volumes. Infrequently, angina may develop due to coronary flow mismatch, and as the ventricle fails, congestive heart failure develops with symptoms of dyspnea on exertion, orthopnea, paroxysmal nocturnal dyspnea, and lower extremity edema.
Evaluation
Echocardiography is the diagnostic modality of choice for both initial evaluation and routine follow-up. It provides diagnostic confirmation, assessment of the severity of valve regurgitation, and evaluation of LV function and the aortic root. It can also help determine the etiology of the aortic regurgitation (eg, infective endocarditis or aortic dissection). The clinical stages and management recommendations are defined by symptomatic status, severity of the regurgitation, LV volume, and LV systolic function (Table 75-5). Transesophageal echocardiography may improve sensitivity and specificity.
TABLE 75-5 ■ SEVERITY OF AORTIC REGURGITATION
Exercise testing may be reasonable for asymptomatic patients who wish to initiate an exercise regimen, but the results have not been consistently useful in predicting outcomes for asymptomatic patients with normal resting cardiac function. It may also be used to objectively assess exercise capacity in otherwise asymptomatic patients or patients with equivocal symptoms.
CT angiography is useful for the diagnosis and follow-up of patients with aortic dissections, aneurysms, or annuloaortic ectasia. Magnetic resonance angiography (MRA) similarly allows for evaluation and follow-up of aortic
disease and also provides a quantitative assessment of aortic regurgitation, which can be helpful in patients with suboptimal echocardiographic images or if there is discordance between clinical assessment and noninvasive studies.
Cardiac catheterization is routinely performed in patients being evaluated for aortic valve replacement, and provides intraventricular pressure measurements, but is not recommended for routine quantification of aortic regurgitation. It is contraindicated in acute aortic dissection or when there are large mobile vegetations on the aortic valve.
Management
Medical management Surgery is indicated for patients who develop either angina or signs of congestive heart failure, since the mortality for patients with angina is more than 10% per year and for heart failure is more than 20% per year. Medical management in symptomatic patients results in poor outcomes even if the LV function is normal. Patients older than 75 years are more likely to develop either symptoms or ventricular dysfunction at earlier stages of the disease, and have a poorer prognosis once they develop ventricular dysfunction.
The medical management of aortic insufficiency is best achieved with vasodilators that reduce afterload and wall stress. Since symptomatic disease carries such a poor prognosis without surgery, medical management is primarily indicated for patients who are not surgical candidates because of comorbidities, to preoperatively optimize hemodynamics, or for asymptomatic hypertensive patients with normal ventricular function.
Conflicting data on the benefits of hydralazine, ACE inhibitors, and calcium channel blockers suggest that vasodilator therapy is not indicated for asymptomatic, normotensive patients with normal ventricular function. However, once symptoms or ventricular dysfunction develops, the patient should be considered and evaluated for surgery. Although β-blockers are not recommended as first-line medication for management of hypertension due to their bradycardic effect, they may benefit those patients who have LV dysfunction.
The absence of data indicating that exercise contributes to the progression of aortic insufficiency suggests that the asymptomatic patient with normal LV function may participate in the full range of physical activities, with the exception of isometric exercises, which are contraindicated. However, it is prudent to exercise-test patients to the anticipated level of planned activity to assess tolerance prior to initiating an exercise regimen.
Patients with aortic regurgitation should have regularly scheduled follow-up. Mild regurgitation with normal ventricular function can be followed clinically on an annual basis with biennial or triennial echocardiograms; more severe valvular regurgitation should be followed with annual or even semiannual echocardiograms depending on the presence of ventricular dilatation (> 60 mm). Asymptomatic patients with more severe dilatation (> 70 mm) should be followed with echocardiograms every 4 to 6 months because the likelihood of developing symptoms or ventricular dysfunction is as high as 20% per year. Asymptomatic patients with normal LV function and dilated ventricles (LVESD > 50 mm) may be candidates for early valve replacement.
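The surveillance schedule described above can be summarized as a simple lookup. The following is an illustrative sketch of the intervals stated in the text, not a clinical decision tool; the function name and severity categories are invented for illustration.

```python
def ar_echo_interval(severity, lvedd_mm=None):
    """Illustrative mapping of chronic aortic regurgitation surveillance
    intervals (in months) as described in the text. Not clinical guidance."""
    if severity == "mild":
        return (24, 36)          # biennial to triennial echocardiograms
    if lvedd_mm is not None and lvedd_mm > 70:
        return (4, 6)            # severe dilatation: echo every 4 to 6 months
    return (6, 12)               # more severe regurgitation: annual or semiannual
```

For example, an asymptomatic patient with severe regurgitation and a ventricular dimension of 72 mm would map to the 4- to 6-month interval.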
The hospital clinician should also be aware that while intra-aortic balloon counterpulsation is an excellent adjunct to medical therapy in the appropriate patient, it is contraindicated in patients with aortic insufficiency.
Surgical management Surgery is not indicated for asymptomatic patients with normal ventricular function and minimal ventricular dilatation regardless of the severity of valvular regurgitation. However, once symptoms or LV dysfunction develop in patients with severe aortic regurgitation, the patient should be considered for surgery (Table 75-6). Patients with severe LV dysfunction have a high operative mortality (at least 10%) and a lower postoperative survival; therefore, asymptomatic patients should be closely followed for the development of LV dysfunction.
TABLE 75-6 ■ RECOMMENDATIONS FOR SURGERY IN AORTIC INSUFFICIENCY
In older patients with severe compensated aortic insufficiency, the onset of symptoms can be hard to ascertain, since mild dyspnea on exertion and fatigue often mimic the effects of aging. However, once ventricular dysfunction develops, the older patient is more likely to have persistent postoperative ventricular dysfunction and symptoms, as well as decreased postsurgical survival. Therefore, in patients without comorbidities that contraindicate surgery, an earlier commitment to surgery is generally the preferred strategy.
If the patient is asymptomatic but the LVESD exceeds 65 mm, surgery should be considered if it is low risk, because of the high risk of sudden death. Although the surgical treatment of asymptomatic patients with severe aortic regurgitation and LV dilatation is controversial, if surgery is planned based on ventricular size or function, then two consecutive studies should confirm the findings.
Surgical valve replacement is the treatment of choice for symptomatic patients with aortic insufficiency. Patients with annuloaortic ectasia or concomitant ascending aortic aneurysms may be candidates for aortic valve repair and replacement of the ascending aorta (David procedure), although such procedures are usually reserved for experienced centers. This procedure should be judiciously used in older adults, in whom the operative risk is usually higher and aortic valve replacement alone is simpler and carries a more predictable result. Various new techniques in aortic leaflet repair and reconstruction have enjoyed recent popularity. They are mentioned here for completeness, although their application is likely more appropriate in a younger patient population. TAVR is currently not recommended in patients with aortic insufficiency.
MITRAL STENOSIS
Definition
Mitral valve stenosis is the progressive narrowing of the orifice of the mitral valve with a resultant increase in left atrial, pulmonary artery, and right ventricular pressures.
Epidemiology
The overwhelming majority of mitral stenosis is caused by rheumatic heart disease, which causes thickening and calcification of the leaflets and chordae as well as shortening of the chordae and fusion of the commissures. While this tends to occur in younger patients and is rarely seen in older patients, the number of older patients may be increasing. In developed countries, most patients present in their forties and fifties, but some studies note that a third of patients are older than 65 years. Only 60% of patients presenting with mitral stenosis recall a history of rheumatic fever. The disease progresses very slowly; the mitral valve area, which normally measures 4.0 to 5.0 cm2, is estimated to decrease by 0.09 to 0.32 cm2/year. When the valve area is reduced to between 1.5 and 2.5 cm2, patients usually develop symptoms.
Significant valvular disease lags the development of rheumatic fever by 20 to 40 years. However, once symptoms start, the 10-year survival is only 50% to 60%. Patients who are asymptomatic or with minimal symptoms have an 80% 10-year survival, and 60% have no progression of symptoms. Patients who meet criteria for surgery but do not undergo an operation have a 10-year survival below 30%. Patients with severe pulmonary hypertension usually live fewer than 3 years, and most patients will die from progressive pulmonary hypertension, congestive heart failure, systemic emboli, pulmonary emboli, or infection.
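The cited rates of valve-area decline are consistent with this multidecade latency, as a back-of-envelope projection shows. The sketch below assumes a constant rate of decline, a simplification for illustration only.

```python
def years_to_threshold(area_now_cm2, threshold_cm2, decline_per_year):
    """Years for mitral valve area to fall to a threshold at a constant
    decline rate. A back-of-envelope sketch, not a prognostic model."""
    if area_now_cm2 <= threshold_cm2:
        return 0.0
    return (area_now_cm2 - threshold_cm2) / decline_per_year

# Starting from a normal 4.5-cm2 valve, reaching the symptomatic range
# (~2.0 cm2) takes roughly 8 years at the fastest cited rate (0.32 cm2/y)
# and roughly 28 years at the slowest (0.09 cm2/y), in keeping with the
# 20- to 40-year lag after rheumatic fever.
fast = years_to_threshold(4.5, 2.0, 0.32)
slow = years_to_threshold(4.5, 2.0, 0.09)
```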
Senile calcific mitral stenosis is becoming more common in the United States. This is usually associated with mitral annular calcification that extends into the leaflets and is prevalent in patients with decreased renal function, elevated inflammatory markers, or senile aortic stenosis. Although not all patients develop progressive stenosis, the rate of progression, when it occurs, is accelerated compared to rheumatic disease.
Other rarer causes of mitral stenosis include intracardiac clot, intracardiac tumor (such as myxoma), or congenital malformations.
Presentation
Mitral stenosis often presents with new-onset atrial fibrillation or an embolic event; sometimes patients come to medical attention because of fatigue or dyspnea, and rarely due to hemoptysis. The left recurrent laryngeal nerve can be compressed by the enlarged left atrium, causing hoarseness (Ortner syndrome). The onset of atrial fibrillation sometimes results in pulmonary edema and death. On physical examination, a loud first heart sound, an opening snap, and a diastolic rumble may be noted.
Diagnosis
Evaluation of patients with suspected mitral stenosis includes an echocardiogram, both to confirm the diagnosis and assess the severity of the disease and therapeutic options (Table 75-7). Patients have mild mitral stenosis if the valve area is greater than 1.5 cm2, moderate if the valve area is 1.0 to 1.5 cm2, and severe if the mitral valve area is less than 1.0 cm2.
Additionally, pulmonary pressures should be assessed, since this also
determines the disease severity.
TABLE 75-7 ■ ECHOCARDIOGRAPHIC FINDINGS IN MITRAL STENOSIS
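The severity grading by valve area described above can be expressed as a small classification function. This is an illustrative sketch of the stated thresholds only; as the text notes, pulmonary pressures also factor into overall disease severity.

```python
def mitral_stenosis_severity(valve_area_cm2):
    """Grades mitral stenosis by valve area per the thresholds in the text:
    mild > 1.5 cm2, moderate 1.0-1.5 cm2, severe < 1.0 cm2. Illustrative only."""
    if valve_area_cm2 > 1.5:
        return "mild"
    if valve_area_cm2 >= 1.0:
        return "moderate"
    return "severe"
```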
Transesophageal echocardiography is useful when transthoracic echocardiography provides limited images, when the presence of left atrial thrombus needs to be excluded, or when valvuloplasty is contemplated.
Cardiac catheterization is rarely used to facilitate assessment of the mitral valve gradient. It is, however, recommended for assessing the coronaries as part of the preoperative work-up, particularly in the older patient. Stress testing has value when there is a discrepancy between the echocardiographic severity of mitral stenosis at rest and clinical symptoms. CT scanning is valuable in patients with senile mitral stenosis, especially those considered for surgical intervention, as it complements the echocardiographic assessment of mitral annular and leaflet calcification.
Management
Medical management It should be emphasized that medical management cannot reduce a mechanical narrowing like mitral stenosis. However, increasing diastolic filling time by slowing the heart rate with β-blockade may be helpful in patients in sinus rhythm with exertional symptoms. The addition of sodium restriction and a diuretic ameliorates pulmonary edema. Antibiotic prophylaxis for infective endocarditis is reserved for patients at highest risk for developing infective endocarditis or experiencing complications (see General Considerations below). Asymptomatic patients should be followed closely for the development of symptoms, at which time they should be assessed by echocardiogram.
A substantial number of older patients (30%–40%) will present with atrial fibrillation. Both age and left atrial size are predictive of the development of atrial fibrillation. Unfortunately, atrial fibrillation carries a guarded prognosis, since only 25% of mitral stenosis patients with atrial fibrillation will survive 10 years compared to 46% of those who remain in sinus rhythm. Treatment includes anticoagulation, rate control, and electrical or chemical cardioversion, especially if associated with hemodynamic instability. Patients who remain in atrial fibrillation for more than 24 to 48 hours are at increased risk of embolic complications and should be promptly anticoagulated. Electrical cardioversion may be used, but only after confirming the absence of a left atrial thrombus by echocardiogram. If a thrombus is present, treatment may include 3 weeks of anticoagulation, followed by confirmation of the absence of thrombus by repeat echocardiography and subsequent cardioversion. In this setting, transesophageal echocardiography is the diagnostic tool of choice.
Patients with paroxysmal or persistent atrial fibrillation, prior emboli, or left atrial thrombus should be anticoagulated. Systemic emboli occur in 20% of patients; age and atrial fibrillation are predictive of embolization.
Exercise is not contraindicated in asymptomatic patients with mild mitral stenosis. In patients with more severe stenosis, exercise is often limited by symptoms. Therefore, exercise regimens for patients with more symptomatic or severe disease should be individually tailored.
Percutaneous mitral valvuloplasty Percutaneous mitral valvuloplasty is successful in select patients and often doubles the valve area with a substantial decrease in valve gradient. The selection of patients for this treatment option is determined by echocardiographic assessment of the valve, and is based on leaflet mobility, subvalvular apparatus, leaflet thickening, and the presence of calcification (Wilkins Score). The lowest scores are assigned to valves with the greatest leaflet mobility, the least subvalvular thickening, the most normal leaflet thickness, and the least calcium deposition. Patients with these valve characteristics (lowest scores) have the best response to balloon valvuloplasty. The majority of patients (90%) will see symptomatic relief, with a freedom from valve-related complications or death of between 50% and 65% at 7 years and as high as 80% to 90% in patients with favorable (low) preprocedural echocardiographic scores.
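The scoring logic can be sketched as follows. Note that the 1-to-4 grading of each component and the cutoff of 8 or less as "favorable" are conventions of the Wilkins score assumed here for illustration; the text above states only that the four components are graded and that the lowest totals predict the best response.

```python
def wilkins_total(mobility, subvalvular, thickening, calcification):
    """Sums the four Wilkins components (leaflet mobility, subvalvular
    thickening, leaflet thickening, calcification). Conventionally each is
    graded 1 (most favorable) to 4 (least favorable), giving totals of 4-16.
    The grading scale is an assumption, not stated in the text above."""
    components = (mobility, subvalvular, thickening, calcification)
    assert all(1 <= c <= 4 for c in components), "each component graded 1-4"
    return sum(components)

def favorable_for_valvuloplasty(total_score):
    # Conventional cutoff (assumed): totals of 8 or less favor balloon valvuloplasty.
    return total_score <= 8
```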
Symptomatic patients, or those with pulmonary hypertension, who have favorable echocardiographic mitral valve scores and no atrial thrombus should be referred for mitral valvuloplasty. The risks for percutaneous mitral valvuloplasty are low (Table 75-8). Therefore, even patients with less favorable echocardiographic scores who are at high surgical risk may be considered candidates for this approach. However, balloon valvuloplasty is contraindicated in patients with moderate-to-severe mitral regurgitation and/or the presence of left atrial clot. Unfortunately, patients older than 65
years have a lower success rate, higher incidence of complications, and shorter duration of symptom relief with this approach.
TABLE 75-8 ■ COMPLICATIONS OF PERCUTANEOUS MITRAL VALVULOPLASTY
Surgical management As a result of the success of balloon valvuloplasty in patients with favorable valve morphology, surgery is usually indicated only if the patient has failed percutaneous intervention or if the valve has unfavorable characteristics (high Wilkins Score) for balloon valvuloplasty. Patients with mitral stenosis ineligible for valvuloplasty are best treated surgically, as are patients with left atrial thrombus. Additionally, since balloon valvuloplasty requires a significant level of expertise and the outcomes are related to experience, the American Heart Association recommends surgery if this experience is not available.
Patients with mild symptoms, severe pulmonary hypertension, and moderate-to-severe mitral stenosis may also benefit from surgery if balloon valvuloplasty is not appropriate or available. Similarly, patients with recurrent systemic emboli despite therapeutic anticoagulation may benefit from surgical intervention if ineligible for percutaneous treatment (Table 75- 9). Notably, surgery is not recommended for patients with isolated mild mitral stenosis.
TABLE 75-9 ■ SURGICAL RECOMMENDATIONS FOR MITRAL STENOSIS
The surgical options include open repair with commissurotomy or valve replacement with either a mechanical or a bioprosthetic valve. The surgical
risk increases with decreased preoperative functional status, older age, decreased cardiac function, pulmonary hypertension, and the presence of coronary artery disease. Operative mortality can be as high as 20% in the older patient with significant comorbidities and pulmonary hypertension. Nonetheless, it is not recommended to wait until the patient becomes severely symptomatic (NYHA class IV), since this results in a substantial increase in the surgical risk. Even so, surgery should be considered despite severe symptoms, since both quality of life and survival are exceedingly poor without surgical intervention.
Patients with senile calcific mitral stenosis often present a surgical challenge, because the calcification involves the annulus and the base of the leaflets. Valve replacement is complex and often involves annular debridement with reconstruction, which significantly increases the operative risk. Therefore, intervention is often delayed until symptoms are severely limiting and cannot be managed medically.
Atrioventricular groove disruption is a rare but catastrophic complication of heart surgery. It is almost exclusively seen with mitral valve procedures and is most closely associated with two risk factors, age and mitral annular calcification.
MITRAL REGURGITATION
Definition
Mitral valve regurgitation is the inability of the mitral valve to close properly resulting in regurgitation of volume into the left atrium from the left ventricle during systole. It is important to distinguish between primary (organic) and secondary (functional) mitral regurgitation since this has implications in both treatment and prognosis.
Epidemiology
Significant mitral regurgitation occurs equally in men and women, affecting approximately 2% of the population. The most common cause is mitral valve prolapse, which is present in 1% to 2.5% of the population and can occur either spontaneously or as a familial disorder. The latter is associated with a low but significant incidence of sudden death, presumably due to ventricular arrhythmias.
Acute mitral regurgitation is caused by disruption of the valve apparatus (leaflet perforation, chordal rupture, or papillary muscle rupture) and is often
caused by endocarditis or myocardial infarction.
Chronic mitral regurgitation can be either primary (organic), caused by mitral valve prolapse, rheumatic heart disease, endocarditis, or coronary artery disease, or secondary (functional), due to LV dysfunction and annular dilation.
Chronic regurgitation is better tolerated, but when severe, especially in association with a flail leaflet, it is associated with a 7% per year mortality. Patients with severe mitral valve regurgitation and a low ejection fraction have a particularly poor prognosis. The 10-year survival for patients with ejection fractions less than 50% is 32%, compared to a 70% 10-year survival for patients with ejection fractions greater than 60%. Even patients with a borderline normal ejection fraction (50%–60%) have a decreased 10-year survival (53%) if untreated.
Presentation
Patients with chronic mitral valve prolapse may present with palpitations, panic attacks, atypical chest pain, dyspnea, easy fatigue, volume overload, or congestive heart failure. Palpitations may be caused by the onset of atrial fibrillation. Cessation of caffeine, tobacco, alcohol, and other stimulants may help control the anxiety in some patients. Acute mitral regurgitation usually presents either with shock or respiratory distress.
Physical examination reveals a holosystolic murmur, which is best heard at the apex of the heart with radiation to the left axilla. If the mitral regurgitation is severe and the atrial and ventricular pressures start to equalize, the murmur may be diminished. With cardiac enlargement, a third sound may also be heard.
Evaluation
As with other cardiac valvular problems, the diagnosis is best confirmed and quantified by echocardiogram (Table 75-10), which allows assessment of the severity of regurgitation, putative causes of the valve dysfunction, as well as assessment of LV function. Transesophageal echocardiography often provides an even better assessment of the mitral valve. Cardiac catheterization, cardiac MRI, and viability studies can help identify those patients with functional ischemic MR who might benefit from surgical intervention.
TABLE 75-10 ■ SEVERITY OF MITRAL REGURGITATION
Management
Medical management Asymptomatic patients with mild primary mitral regurgitation may be followed with echocardiograms every 3 to 5 years, while those with moderate mitral regurgitation should be followed every 1 to 2 years. Asymptomatic patients with severe organic regurgitation should probably undergo exercise testing to confirm the absence of symptoms and, if truly asymptomatic, should undergo restudy by echocardiogram every 6 to 12 months. The asymptomatic patient with organic mitral regurgitation, a normal ejection fraction, and neither pulmonary hypertension nor LV dilation may exercise without restriction.
Medical management consists of blood pressure control with vasodilators and diuretics. Asymptomatic patients with normal blood pressure and LV function do not require treatment, and endocarditis prophylaxis is recommended for all patients with mitral valve prolapse and patients with moderate or severe organic mitral regurgitation.
Surgical management The surgical options consist of either repair or replacement (Table 75-11). All patients should be considered for valve repair because of the marked improvement in survival, LV function, and the
avoidance of long-term anticoagulation with valve repair compared to replacement. Although durability of repair is excellent, with freedom from reoperation equaling that of valve replacement (7%–10% at 10 years), the freedom from reoperation is dependent on the adequacy of the repair, whether the repair involved the anterior or the posterior valve leaflets, and whether chordal replacement was necessary. Patients with isolated posterior leaflet pathology are more likely to have long-term success compared to patients with anterior or bileaflet repairs.
TABLE 75-11 ■ SURGICAL RECOMMENDATIONS FOR MITRAL REGURGITATION
When mitral replacement is necessary, the procedure should strive to retain as much of the mitral valve apparatus as possible. Preservation of these structures results in improved LV function, exercise tolerance, and survival. The choices for mitral valve replacement include bioprosthetic or mechanical valves. Bioprosthetic valves in the mitral position are not as durable as in the aortic position; however, this option avoids the risks of
lifelong anticoagulation. Conversely, a mechanical valve has the advantage of durability but requires a commitment to lifelong anticoagulation.
Mitral valve repair or replacement is indicated for all patients with symptoms (NYHA class II–IV) and severe regurgitation even in the face of normal cardiac size and function. Asymptomatic patients benefit from surgical intervention if they develop atrial fibrillation, pulmonary hypertension, LV dysfunction (ejection fraction < 60%), or ventricular dilation (> 40 mm). In patients with atrial fibrillation, an intraoperative maze or modified maze procedure combined with suture closure of the left atrial appendage should be considered, in order to reestablish sinus rhythm and possibly reduce the risk of systemic embolization, respectively. Once LV dysfunction develops, patient survival, even after repair or replacement, is compromised. Therefore, patients who are asymptomatic, with normal LV size and function, should be followed closely, and, if necessary, exercise testing should be considered to confirm the absence of symptoms. Surgical intervention may be warranted for asymptomatic patients with severe mitral regurgitation with none of the noted indications for surgery if and only if the mitral valve can be repaired.
Operative mortality varies based on the procedure. Mitral valve repair is associated with a 2% perioperative mortality compared to 6% for valve replacement. Patients with ischemic and functional mitral regurgitation do much worse than the patients with organic regurgitation. A study of 292 patients older than 70 years demonstrated an in-hospital mortality of 0.7% for mitral repair compared to 14% for replacement. A study comparing cohorts of patients older than 75 years, between 65 and 75 years, and younger than 65 years demonstrated an increased operative risk for older patients. However, restoration of life expectancy following surgery is the same for older as for younger patients. Current data suggest that in patients with functional mitral regurgitation there is no improvement in survival by performing a concomitant mitral repair with coronary artery bypass.
Nonetheless, there may be an improvement in postoperative symptoms.
Percutaneous options Several percutaneous techniques for repair or replacement of the mitral valve are under investigation. Percutaneous approaches to the mitral repair include a clip that provides an edge-to-edge repair for both functional and organic mitral regurgitation. Other repair devices include a mitral annular constraint device placed into the coronary sinus and artificial cord implantation. There are several other repair devices as well as
transcatheter valves that are at different phases of development that will broaden the armamentarium of treatment options available for the next generation of older patients.
GENERAL CONSIDERATIONS
Evaluation of Surgical Risk
Predicted risk of mortality (Table 75-12) Many older patients have multiple comorbidities that impact risk of surgery and the decision to operate. There are, however, a number of statistical models that can be helpful in weighing the impact of individual comorbidities on operative outcome. The two most widely used are the Society of Thoracic Surgeons Predicted Risk of Mortality (STS-PROM) score and the European System for Cardiac Operative Risk Evaluation (EuroSCORE II), which is derived from a European surgical population. The STS-PROM, derived from a rolling cohort of North American patients, analyzes the impact of preoperative variables of patients undergoing coronary artery bypass surgery and valve surgery (with or without concomitant coronary artery bypass) on 30-day mortality and postoperative complications. It is important to recognize that these data are derived only from patients who were operated on, and therefore do not provide insight into patients who were considered for surgery but did not undergo an operation. These risk models, though very informative, should not be relied on as the only indicator for surgery.
TABLE 75-12 ■ PREOPERATIVE RISK ASSESSMENT
Frailty Traditionally, physicians have estimated the probability of patient survival based on clinical judgment. In an attempt to quantify these parameters, there has been increasing study of frailty as a measure of survivability. For example, one frailty index that incorporates a combination of five domains (nutritional status, activity, mobility, strength, and energy)
has been evaluated for its ability to predict postsurgical survival (see Chapter 42).
Multidisciplinary team approach As diagnostic methods and treatment options become more sophisticated, and patients present with more comorbidities, the optimal therapeutic choice is more complex and less clear. Assessing patients using a formal "heart valve team" has proven to improve outcomes. The heart valve team comprises a multidisciplinary group of clinicians including cardiologists, cardiac surgeons, structural valve interventionalists, cardiovascular imaging specialists, anesthesiologists, nurses, and often a geriatrician. The team reviews the patient and collaboratively discusses the therapeutic options, and then works with the patient and their family to arrive at a decision tailored for each individual patient that is consistent with the patient's goals of care.
Endocarditis Prophylaxis
Although there is a surprising dearth of data supporting or refuting the use of antibiotic prophylaxis for patients with valvular disease, the American Heart Association recommends antibiotic coverage for a variety of dental and surgical procedures.
Prophylactic antibiotics are recommended for patients with prosthetic cardiac valves, prior endocarditis, cardiac transplants with abnormal valves, and complex congenital repairs or defects (Table 75-13). Standard antibiotic prophylaxis is orally administered, penicillin-based, and given 1 hour before the procedure, but other regimens may be needed (see Table 75-14). Patients with repaired valves usually do not require endocarditis prophylaxis, as the incidence of infective endocarditis is estimated to be very low. Patients at risk of endocarditis should be covered for dental procedures that involve manipulation of gingival tissue, the periapical region of teeth, or perforation of the oral mucosa. Nondental procedures in the absence of active infection do not require prophylaxis.
TABLE 75-13 ■ INDICATIONS FOR ENDOCARDITIS PROPHYLAXIS
TABLE 75-14 ■ ENDOCARDITIS PROPHYLAXIS ANTIBIOTIC RECOMMENDATIONS
Prior to any valve operation, all patients require dental clearance, and any potential infectious dental issues should be addressed prior to proceeding with surgery. When dental extractions are required, an interval of time is provided for recovery and to avoid bleeding before proceeding with surgery.
Anticoagulation
Anticoagulation has a significant associated morbidity particularly in the older patient. Consequently, the need for anticoagulation plays a pivotal role in the choice of valve. Patients with mechanical valves require lifelong anticoagulation, while patients with bioprosthetic valves are often anticoagulated for only 3 months (Table 75-15). If there is no contraindication to antiplatelet therapy, low-dose aspirin is also recommended for all patients with valve replacement.
TABLE 75-15 ■ ANTICOAGULATION FOR PROSTHETIC VALVES
Direct-acting oral anticoagulation agents are not approved for anticoagulation after mechanical valve replacement—therefore, chronic
anticoagulation requires warfarin therapy. The recommended international normalized ratio (INR) for patients with mechanical aortic valves is between 2 and 3, although certain mechanical valves allow lower INRs of 1.5 to 2.0 after 3 months. However, if the patient has a history of prior thromboembolism, LV dysfunction, atrial fibrillation, or hypercoagulability, the INR should be maintained between 2.5 and 3.5. Patients with mechanical mitral valves should have their INR maintained between 2.5 and 3.5, and those with bioprosthetic valves are often anticoagulated (INR 2.0–3.0) for the first 3 months following implantation.
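The INR targets above can be condensed into a small lookup. This is an illustrative sketch of the targets as stated in the text, not a prescribing aid; the function name and parameters are invented for illustration.

```python
def target_inr(valve_type, position, high_risk=False, early_postop=False):
    """Illustrative sketch of the INR targets described in the text.
    valve_type: 'mechanical' or 'bioprosthetic'; position: 'aortic' or 'mitral'.
    high_risk covers prior thromboembolism, LV dysfunction, atrial
    fibrillation, or hypercoagulability. Returns (low, high) or None
    when routine anticoagulation is not described. Not clinical guidance."""
    if valve_type == "bioprosthetic":
        # often anticoagulated only for the first 3 months after implantation
        return (2.0, 3.0) if early_postop else None
    if position == "mitral" or high_risk:
        return (2.5, 3.5)
    return (2.0, 3.0)   # mechanical aortic valve without risk factors
```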
The risk of thromboembolism for anticoagulated patients with mechanical valves is approximately 1% to 2% per year. The risk is lower in bioprosthetic valves (0.7%), and lower in patients with aortic prosthetic valves compared to mitral valves, regardless of the type of prosthetic valve implanted.
Hemorrhagic complications are more likely if the INR is greater than 5. Patients with an INR between 5 and 10 can be treated by holding warfarin and administering 1 to 2.5 mg of oral vitamin K. However, the INR should be monitored daily until it falls below 5, at which time warfarin can be reinitiated at adjusted doses. Of note, it is often harder to manage anticoagulation in the older patient as a result of polypharmacy in this patient population. An acute reduction in the INR for patients who are actively bleeding may be achieved by administering intravenous fresh frozen plasma. Vitamin K can also help reduce a dangerously high INR, but it complicates reanticoagulation.
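The supratherapeutic-INR response above reduces to a short decision rule. The sketch below only restates the steps given in the text for illustration; it is not a management protocol, and the action strings are invented summaries.

```python
def elevated_inr_action(inr, active_bleeding=False):
    """Illustrative summary of the supratherapeutic-INR response in the
    text. Returns a short action string. Not a clinical protocol."""
    if active_bleeding:
        return "IV fresh frozen plasma"   # acute reversal for active bleeding
    if 5 <= inr <= 10:
        return "hold warfarin; 1-2.5 mg oral vitamin K; daily INR until < 5"
    return "continue adjusted dosing"
```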
Temporary cessation of anticoagulation in patients with mechanical valves is sometimes medically necessary. Patients with mechanical aortic valves (without risk factors) can have warfarin held 48 to 72 hours preoperatively and restarted within 24 hours following surgery without the need for bridging heparin anticoagulation. However, patients with mechanical mitral or mechanical aortic valves and high-risk factors should be bridged with heparin when the INR falls below 2. The heparin may be held 4 to 6 hours before surgery and restarted as soon as possible when the immediate postoperative risk of bleeding allows. For emergency procedures, it is preferable to administer fresh frozen plasma to reverse the effects of warfarin, since the administration of vitamin K will make reanticoagulation difficult and increases the risk of a hypercoagulable state.
Prosthetic Valve Choices
Traditional open-surgical prosthetic valves fall into two broad groups: biological and mechanical. Mechanical valves have the advantage of durability but the disadvantage of requiring lifelong anticoagulation; bioprosthetic valves do not require anticoagulation but have finite durability. Of note, immediately following surgery, many surgeons will administer anticoagulants (or aspirin) for a limited interval (most commonly 3 months).
Notably, the risk of embolization and the durability of a prosthesis are determined, in part, by valve location. Biological aortic valves are more durable than the same valve in the mitral position, and mechanical valves in the aortic position carry a lower risk of thromboembolism than those in the mitral position.
Within each class of valve, mechanical and bioprosthetic, there are numerous types of prostheses; each type of valve is available in different forms (eg, porcine vs bovine pericardial or bileaflet vs tilting disc).
The choice of replacement valve is sometimes dictated by a contraindication to anticoagulation (which necessitates the implantation of a bioprosthetic valve). Otherwise, the choice resides with the patient.
There are some data to suggest that the rate of bioprosthetic valve deterioration is attenuated in older patients, prompting many surgeons to recommend bioprosthetic aortic valves for patients older than 65 years and bioprosthetic mitral valves for patients older than 70 years. Valves such as On-X (Cryolife, Kennesaw, Georgia, USA) retain all the durability benefits of a mechanical valve while requiring considerably less anticoagulation (INR 1.5 to 2.0 with aspirin after 3 months).
Other indirect factors may impact the choice of valve: atrial fibrillation, multiple valve replacement, prior mechanical valve, prior cardiac surgery, and annulus size may argue in favor of a mechanical valve. Essentially, the risk of a mechanical valve is that of anticoagulation and embolization, while the risk of a bioprosthetic valve is that of valve failure and reoperation. In the end, the choice of valve—unless there are contraindications to anticoagulation—belongs to the patient, who ultimately must live with the perils of anticoagulation or the threat of reoperation.
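The decision factors above can be sketched as a small, illustrative helper. The age cutoffs (65 years for aortic, 70 years for mitral) and the anticoagulation-contraindication rule come from the text; the function name, parameters, and return values are hypothetical, and the real choice, as the text emphasizes, belongs to the patient.

```python
def suggest_valve_type(position: str, age: int,
                       anticoagulation_contraindicated: bool,
                       factors_favoring_mechanical: bool = False) -> str:
    """Illustrative sketch of the valve-choice considerations described
    in the text; the final decision rests with the patient."""
    if anticoagulation_contraindicated:
        # A contraindication to anticoagulation necessitates a bioprosthesis.
        return "bioprosthetic"
    # Bioprosthetic deterioration is slower at older ages, so many surgeons
    # recommend bioprostheses above these position-specific age cutoffs.
    age_cutoff = 65 if position == "aortic" else 70
    if age > age_cutoff and not factors_favoring_mechanical:
        return "bioprosthetic"
    # Otherwise the trade-off (anticoagulation vs reoperation) is the patient's.
    return "patient preference"
```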
Mechanical valves The original mechanical valve was the ball-caged design, which, while durable, had inefficient flow characteristics and required higher levels of anticoagulation than the current generation of valves. The bileaflet mechanical valve is the most commonly used valve in the aortic
position because of its superior flow characteristics. The risk of thromboembolism with anticoagulation is approximately 1% to 2% per year.
Bioprosthetic valves Stented and nonstented porcine valves and bovine pericardial valves are available and, like homografts, do not require immunosuppression or anticoagulation. The risk of embolism for this class of valves is approximately 0.7% per year without anticoagulation. However, all biological valves are prone to structural deterioration. The rate of deterioration is slower in older patients: at 15 to 20 years, patients aged 70 have 90% freedom from structural valve deterioration, and patients older than 75 years have 90% to 95% freedom from reoperation. Stentless valves lack the mounting frame and are therefore more hemodynamically efficient, but this does not improve survival in the older patient. Minimal aortic gradients can be achieved regularly with this type of valve.
Aortic homografts Cadaveric valves do not provide improved durability but are particularly useful in patients with endocarditis and tissue loss. The rate of thromboembolism is low, and, like the stentless porcine valves, they are hemodynamically efficient especially at small sizes. Although there is no need for antirejection medications, the valve has a propensity to become heavily calcified, making re-replacement much more challenging.
Pulmonary Valve Autotransplant (Ross Procedure)
Mr. Donald Ross devised an operation in which the patient’s own pulmonary valve is excised and used to replace the aortic valve, and the pulmonary valve is then replaced with a homograft or bioprosthetic valve. Conceptually, the lower-pressure pulmonary circuit allows for longer durability of the homograft or bioprosthetic valve in that position, and the aortic valve, now an autologous valve, should also have increased durability. The operative morbidity and mortality for this procedure, especially in inexperienced hands, are higher than for bioprosthetic aortic valve replacement. The increased procedural risk and limited benefit in an older patient make it rarely indicated in this population. A variation of this operation is also available for mitral replacement, but it is currently investigational and has the same limitations for use in an older patient.
Transcutaneous valves In the aortic position, various options are well-established and may be viewed in two broad categories, “balloon-expandable” and “self-expanding.” No approved transcutaneous mitral valve currently exists, though multiple startup ventures are competing to fill this void. It is anticipated that a transcatheter option for the mitral valve will be available in the relatively near future.
Sutureless (aortic) valves Several “sutureless” valves are also currently available and offer a hybrid option. These valves are implanted through an open surgical approach and, therefore, still require cardiopulmonary bypass support. However, various features allow for a much quicker implantation, with no or considerably less suturing, and a correspondingly larger aortic valve annular orifice area.
Valve repair Repair of the aortic or mitral valve is ideal when feasible. The preservation of ventricular geometry, the accommodation of natural annular motion, and superior durability make mitral valve repair the best option for suitable patients; these advantages translate into lower operative mortality for mitral repair compared to replacement (1% to 2% versus 5.4% to 6.4%, respectively). Aortic valve repair for calcific disease is less durable and is rarely indicated; however, in the setting of normal leaflets, the aortic valve is repairable with excellent results (85% freedom from reoperation at 10 years).
FURTHER READING
Afilalo J, Lauck S, Kim DH, et al. Frailty in older adults undergoing aortic valve replacement: the FRAILTY-AVR Study. J Am Coll Cardiol. 2017;70(6):689–700.
Bonow RO. Chronic mitral regurgitation and aortic regurgitation. J Am Coll Cardiol. 2013;61(7):693–701.
Carabello BA. Clinical practice. Aortic stenosis. N Engl J Med. 2002;346(9):677–682.
Carabello BA. Modern management of mitral stenosis. Circulation. 2005;112(3):432–437.
Carabello BA. The current therapy for mitral regurgitation. J Am Coll Cardiol. 2008;52(5):319–326.
Enriquez-Sarano M, Tajik AJ. Clinical practice. Aortic regurgitation. N Engl J Med. 2004;351(15):1539–1546.
Kim DH, Afilalo J, Shi SM, et al. Evaluation of changes in functional status in the year after aortic valve replacement. JAMA Intern Med. 2019;179(3):383–391.
Leon MB, Smith CR, Mack M, et al. Transcatheter aortic-valve implantation for aortic stenosis in patients who cannot undergo surgery. N Engl J Med. 2010;363(17):1597–1607.
Otto CM, Nishimura RA, Bonow RO, et al. 2020 ACC/AHA Guideline for the Management of Patients With Valvular Heart Disease: A Report of the American College of Cardiology/American Heart Association Joint Committee on Clinical Practice Guidelines. J Am Coll Cardiol. 2021;77(4):e25–e197.
Rodes-Cabau J, Mok M. Working toward a frailty index in transcatheter aortic valve replacement. JACC Cardiovasc Interv. 2012;5(9):982–983.
Ross J Jr, Braunwald E. Aortic stenosis. Circulation. 1968;38(1s5):V61–V67.
Smith CR, Leon MB, Mack MJ, et al. Transcatheter versus surgical aortic-valve replacement in high-risk patients. N Engl J Med. 2011;364(23):2187–2198.
Chapter 76
Heart Failure
Mathew S. Maurer, Scott L. Hummel, Parag Goyal
INTRODUCTION
Heart failure is a complex clinical syndrome that can result from any structural or functional cardiac disorder that impairs the ability of the ventricle to fill with or eject blood. Heart failure is not a single disease but rather a syndrome, like falls and incontinence, with a diverse set of etiologies and multiple underlying mechanisms. Heart failure is among the most common cardiovascular conditions experienced by older adults, reflecting a combination of normative age-related changes in cardiovascular structure and function, the rising prevalence of cardiovascular risk factors and diseases with advancing age, and the decline in premature cardiovascular deaths. Thus, although the clinical syndrome of heart failure has been recognized by physicians for more than two centuries, only within the past four decades has it been identified as a major public health concern, largely attributable to the aging of the population.
EPIDEMIOLOGY AND ECONOMIC IMPACT
Despite declines in age-adjusted mortality rates from coronary heart disease and stroke, both the incidence and the prevalence of heart failure are increasing, and these trends are projected to continue for the next several decades. As shown in Table 76-1, several factors have contributed to the rise in heart failure cases. Foremost among these is the increasing number of older adults who, by virtue of age-related changes in cardiovascular structure and function coupled with the high prevalence of hypertension, coronary heart disease, and valvular disease with advancing age, are
predisposed to the development of heart failure. In addition, advances in the treatment of other acute and chronic cardiac and noncardiac conditions, most notably atherosclerotic heart disease, hypertension, renal failure, cancer, and infectious diseases, have paradoxically contributed to the increasing burden of heart failure. Indeed, individuals who might have died in middle age from acute myocardial infarction during a prior era are now surviving to older age and developing heart failure in their later years. Similarly, improved blood pressure control has led to a 60% decline in stroke mortality, yet these patients remain at risk for the development of heart failure due to hypertension and left ventricular hypertrophy.
TABLE 76-1 ■ FACTORS CONTRIBUTING TO THE RISING INCIDENCE AND PREVALENCE OF HEART FAILURE
Learning Objectives
Understand the effects of aging on cardiovascular structure and function, and how these changes predispose to the development of heart failure.
Describe the clinical features—including symptoms, signs, and results of diagnostic tests—that distinguish heart failure in older adults from heart failure occurring during middle age.
Delineate nonpharmacologic aspects of care for older adults with heart failure.
Understand current treatment of heart failure with reduced and preserved ejection fraction in older adults.
Discuss management of heart failure in patients approaching the end of life.
Key Clinical Points
Cardiovascular aging is associated with significant changes in cardiac and vascular structure and function that predispose older adults to the development of heart failure.
The clinical features of heart failure, including symptoms, signs, and diagnostic test results, often differ in older adults with heart failure compared to those in younger patients.
Management of heart failure with reduced ejection fraction (HFREF) is generally similar in older and younger patients, but must be individualized in older patients given potentially reduced life expectancy and heterogeneity in patient priorities.
Although trials of many cardiovascular pharmacologic agents have not consistently found reduced mortality or substantially improved clinical outcomes in patients with heart failure and preserved ejection fraction (HFPEF), there has been some progress in recent trials. Nevertheless, effective treatment of this condition remains challenging.
Nonpharmacologic therapies, including lifestyle changes (eg, dietary interventions and exercise) and multidisciplinary care interventions, play a fundamental role in optimizing care and outcomes for older patients with heart failure.
The overall prognosis for heart failure in older adults is poor, and it is therefore essential to incorporate goals of care and end-of- life planning into the clinical decision-making process, especially as symptoms progress and quality of life declines.
Heart failure affects approximately 6.5 million Americans, and it is projected that by 2030 the prevalence of heart failure in the United States will exceed 8 million, largely due to the aging of the population. In addition, over 1 million new cases are diagnosed each year. Moreover, both the incidence and the prevalence of heart failure are strikingly age dependent (Figures 76-1 and 76-2). Indeed, heart failure prevalence doubles for each decade after 40 years of age and exceeds 10% in both men and women older than 80 years. Similarly, heart failure mortality rates increase exponentially with advancing age in all major demographic subgroups of the US population.
FIGURE 76-1. Incident heart failure hospitalizations in the United States by age, gender, and self-reported race, 2005–2011: the Atherosclerosis Risk in Communities Study. (Reproduced with permission from NHANES, 2013 to 2016. National Heart, Lung, and Blood Institute. US Department of Health & Human Services.)
FIGURE 76-2. Prevalence of heart failure in the United States by age and gender: National Health and Nutrition Examinations Survey, 2009–2012. (Reproduced with permission from NHANES, 2013 to 2016. National Heart, Lung, and Blood Institute. US Department of Health & Human Services.)
Heart failure is also a major source of chronic disability and impaired quality of life in older adults, and it is the leading cause of hospitalization in individuals older than 65 years. In 2014, there were 1 million hospital admissions in the United States with a primary diagnosis of heart failure (Table 76-2). Of these, 71% were in patients older than 65 years, 53% were in patients 75 years or older, and 25% occurred in the 2% of the population aged at least 85 years (Figure 76-3). While the majority of heart failure patients younger than 65 years are male, women comprise more than half of heart failure hospitalizations after the age of 65, and the proportion of females continues to rise with advancing age. The prevalence of heart failure in older Caucasians and African-Americans is similar, and hospital admission rates are lower in Hispanics and Asians. Whether this represents a true difference in population prevalence or a difference in the likelihood that affected individuals will seek or receive medical attention is unknown. Heart failure is also a common reason for ambulatory care visits, with almost 2 million physician office visits with a primary diagnosis of heart failure occurring in 2016. In this regard, heart failure ranks second only to hypertension among cardiovascular reasons for outpatient physician visits.
TABLE 76-2 ■ EPIDEMIOLOGY OF HEART FAILURE IN THE UNITED STATESA
FIGURE 76-3. Distribution of hospitalizations for heart failure in the United States by age, 2000–2010. (Reproduced with permission from CDC/NCHS, National Hospital Discharge Survey, 2000–2010. US Department of Health & Human Services.)
As a result of its high prevalence and the need for intensive resource use in both the inpatient and the outpatient settings, the economic burden of heart failure is very high. Heart failure is one of the costliest diagnosis-related groups in the United States, with estimated total annual expenditures in excess of $35 billion. Projections suggest that by 2030, the total cost of heart failure will increase to $69.8 billion, amounting to ≈$244 for every US adult.
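The per-adult figure above is simple division. As a quick back-of-the-envelope check, assuming an adult population of roughly 286 million in 2030 (an assumption implied by, not stated in, the text):

```python
# Back-of-the-envelope check of the projected 2030 heart failure cost.
total_cost_2030 = 69.8e9   # projected total annual cost, USD (from the text)
us_adults_2030 = 286e6     # assumed 2030 adult population implied by the projection

cost_per_adult = total_cost_2030 / us_adults_2030
print(round(cost_per_adult))  # prints 244
```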
PATHOPHYSIOLOGY
Heart failure is the prototypical disorder of cardiovascular aging: age-related changes in the cardiovascular system, in concert with an increasing prevalence of cardiovascular risk factors and diseases at older age, conspire to produce an exponential rise in heart failure prevalence with advancing age.
Aging is associated with extensive changes in cardiovascular structure and function (see Chapter 73). However, in the absence of coexistent cardiovascular disease, cardiac function at rest is well preserved even at very old age. Resting left ventricular ejection fraction and resting cardiac output are largely unaffected by age in healthy individuals.
From the clinical perspective, the changes associated with cardiovascular aging result in an impaired ability of the heart to respond to stress, be it physiologic (eg, exercise) or pathologic (eg, hypertension or myocardial ischemia). Four principal changes in the cardiovascular system contribute directly to the heart’s attenuated capacity to augment cardiac output in response to stress. First, aging is associated with reduced responsiveness to β-adrenergic stimulation. This is related to increased sympathetic nervous system activity and circulating catecholamine levels resulting in β-adrenergic receptor desensitization, rather than decreased β-receptor density on cardiac myocytes or altered responsiveness to intracellular calcium. The diminished response to β-adrenergic stimulation limits the heart’s capacity to maximally increase heart rate and contractility in response to stress, and β2-mediated peripheral vasodilatation is also impaired.
A second major effect of aging is increased stiffness of the large- and medium-sized arteries, primarily because of increased collagen deposition and cross-linking and degeneration of elastin fibers in the media and adventitia. Increased stiffness of the central conduit arteries results in increased impedance to left ventricular ejection (ie, increased afterload), and it also contributes to the increased propensity of older individuals to develop systolic hypertension (Figure 76-4).
FIGURE 76-4. Age-related changes in central conduit arteries lead to numerous physiologic changes. (Reproduced with permission from Lakatta EG. Cardiovascular regulatory mechanisms in advanced age. Physiol Rev. 1993;73[2]:413–467.)
A third major effect of aging is altered left ventricular diastolic filling. Diastole is characterized by four phases: isovolumic relaxation, early rapid filling, passive filling during mid-diastole, and late filling owing to atrial systole. The first two phases, isovolumic relaxation and early rapid filling, are largely dependent on myocardial relaxation, an active, energy-requiring process, whereas filling during the latter two phases is governed principally by intrinsic myocardial “stiffness,” or compliance. Aging is associated with impaired calcium release from the contractile proteins and reuptake by the sarcoplasmic reticulum, inhibiting early diastolic relaxation. In addition, increased interstitial connective tissue content and collagen cross-linking reduce ventricular compliance. Compensatory myocyte hypertrophy in response to increased ventricular afterload and myocyte loss due to apoptosis further compromise left ventricular compliance. Thus, normal aging is associated with important changes, adversely impacting all four phases of diastole and substantially altering the pattern of left ventricular diastolic filling.
Age-related changes in diastolic filling and atrial function can be evaluated noninvasively using Doppler echocardiography to examine diastolic inflow across the mitral valve (Figure 76-5). In healthy young persons, the transmitral inflow pattern is characterized by a large E-wave,
with a rapid upstroke representing rapid filling of the ventricle immediately following the opening of the mitral valve and corresponding to active ventricular relaxation (Figure 76-5A). This is followed by a period in which the rate of filling slows (the downslope of the E-wave, called the deceleration time), mid-diastolic diastasis (in which left atrial and left ventricular pressures are essentially equal), and additional left ventricular filling at the end of diastole corresponding to atrial contraction (the A-wave, or atrial “kick”). Importantly, the majority of ventricular filling occurs in the first half of diastole in young individuals, with a relatively small contribution from atrial contraction.
FIGURE 76-5. Schematic diagram of Doppler echocardiographic mitral valve inflow patterns.
A. Normal pattern. B. Impaired filling pattern. C. Restrictive pattern. AT, acceleration time; DT, deceleration time; IR, isovolumic relaxation; S2, aortic valve closure. (Adapted with permission from Feigenbaum H. Echocardiography, 5th ed. Philadelphia, PA: Lea & Febiger; 1994.)
In older persons, alterations in left ventricular relaxation and compliance result in characteristic changes in the pattern of diastolic filling (Figure 76-5B). Early filling is impaired, and the upstroke of the E-wave is delayed.
Similarly, the downslope of the E-wave (deceleration time) is less steep. To compensate for increased resistance to emptying, the left atrium enlarges.
This results in a more forceful left atrial contraction and an augmented A-wave; thus, a greater proportion of filling occurs in the later period of diastole in older individuals. As much as 30% to 40% of left ventricular end-diastolic volume is attributable to atrial contraction. Thus, older
individuals become increasingly reliant on the atrial “kick” to maximize left ventricular filling.
A third pattern of diastolic filling, referred to as the restrictive pattern, occurs when the left ventricle’s ability to fill becomes severely compromised. In this situation (Figure 76-5C), very little flow occurs after the rapid filling phase in early diastole. This pattern is characterized by a tall, narrow E-wave with a rapid downslope, with diastasis achieved early in diastole. Little additional flow occurs during mid-diastole, and the A-wave is typically small, with an amplitude that is less than 50% of the E-wave. A restrictive pattern indicates marked elevation of the left ventricular diastolic pressure, and it tends to be associated with a poor prognosis. A restrictive filling pattern almost always occurs in patients with advanced cardiac disease and is not attributable to aging alone.
Age-related changes in diastolic filling have several important clinical implications. First, reduction in ventricular filling subverts the Frank-Starling mechanism (whereby increased preload volume results in a higher stroke volume), one of the cardinal adaptive responses (along with sympathetic activation) necessary to acutely increase cardiac output. Second, impaired diastolic filling results in a leftward shift of the normal ventricular pressure-volume relationship; consequently, small increases in diastolic volume lead to greater increases in diastolic pressure in older compared to younger individuals. This increase in diastolic pressure is transmitted back to the left atrium. Over time, this alters left atrial size and function, which, in turn, increases the likelihood of atrial ectopic beats and atrial arrhythmias, especially atrial fibrillation. Thus, atrial fibrillation, like heart failure, increases in prevalence with advancing age. Additionally, atrial fibrillation itself is a common precipitant of heart failure in older adults for two reasons. First, the absence of a coordinated atrial contraction substantially compromises left ventricular diastolic filling due to loss of the atrial “kick.” Second, the rapid, irregular ventricular rate associated with acute atrial fibrillation shortens the diastolic filling period, which further attenuates ventricular filling.
A third effect of altered diastolic filling is an increased propensity for older adults to develop HFPEF, formerly called “diastolic heart failure.” Because of the altered left ventricular pressure-volume relation, increases in left ventricular pressure from ischemia, venoconstriction, or multiple other factors can lead to pulmonary congestion and edema. Moreover, individuals
with impaired diastolic function are often “volume sensitive”; that is, small increments in intravascular volume, as may occur with dietary sodium indiscretion or intravenous fluid administration, result in abrupt rises in intraventricular pressure and, consequently, heart failure symptoms such as shortness of breath and/or exercise intolerance, while intravascular volume contraction, which may arise from poor oral intake or overdiuresis, can cause marked falls in stroke volume, cardiac output, and blood pressure.
The fourth major effect of cardiovascular aging is altered myocardial energy metabolism at the level of the mitochondria. Under resting conditions, older cardiac mitochondria can generate enough adenosine triphosphate (ATP) to meet the heart’s energy requirements. However, when stress causes an increase in ATP demands, the mitochondria are often unable to respond appropriately.
Aging is also associated with significant changes in other organ systems, which impact directly or indirectly on the development and/or management of heart failure. Aging is accompanied by a decline in glomerular filtration rate, which impairs regulation of intravascular volume and electrolyte homeostasis (see Chapters 39 and 82). The reduced capacity of the kidneys to respond to intravascular volume overload or dietary sodium excess increases the risk of heart failure in older individuals. In addition, older patients are less responsive to diuretics and more likely to develop diuretic-induced electrolyte abnormalities than younger patients, which complicates the management of heart failure in the older age group.
Aging is also associated with numerous changes in respiratory function, which serve to diminish respiratory reserve (see Chapter 80). Some of these effects, such as ventilation:perfusion mismatching and sleep-related breathing disorders, may contribute directly to the development of heart failure by producing hypoxemia or pulmonary hypertension. Other changes, such as reduced lung compliance, limit the capacity of the lungs to compensate for the failing heart by increasing tidal volume and minute ventilation, thereby contributing to the patient’s sensation of dyspnea. In more severe cases of cardiac failure, such as pulmonary edema, acute respiratory failure may ensue because of the inability of the lungs to maintain oxygenation and effective ventilation.
Age-related changes in central nervous system function include an impaired thirst mechanism, which may contribute to dehydration and intravascular volume contraction in patients treated with diuretics, and reduced capacity of the central nervous system’s autoregulatory mechanisms
to maintain cerebral perfusion in the face of changes in systemic arterial blood pressure. Aging is also associated with widespread changes in baroreflex responsiveness. For example, impaired responsiveness of the carotid baroreceptors to acute changes in blood pressure may cause orthostatic hypotension or syncope, and these effects may be further aggravated by many of the drugs used to treat heart failure.
Finally, as is well recognized, aging is associated with significant changes in the pharmacokinetics and pharmacodynamics of almost all drugs. When coupled with polypharmacy, which is nearly universal in adults with heart failure, the risk for adverse drug reactions is significant in older adults with heart failure. Careful consideration of drug-drug, drug-disease, and drug-person interactions thus remains paramount when prescribing medications and developing pharmacotherapeutic strategies for older heart failure patients (see Chapter 22).
ETIOLOGY AND PRECIPITATING FACTORS
In general, the risk factors for heart failure are similar in older and younger patients (Table 76-3), but the etiology for heart failure in older individuals is more often multifactorial. Hypertension and coronary heart disease are the most common causes of heart failure, accounting for more than 70% of cases. The term cardiomyopathy, which refers to pathologic abnormalities of the heart, is a descriptor frequently preceded by a modifier that indicates a potential cause of heart failure. For example, hypertensive hypertrophic cardiomyopathy represents a more severe form of hypertensive heart disease most commonly seen in older women and often accompanied by calcification of the mitral valve annulus. These patients often manifest severe diastolic dysfunction and may exhibit dynamic left ventricular outflow tract obstruction indistinguishable from that seen in hypertrophic cardiomyopathy due to sarcomere mutations.
TABLE 76-3 ■ COMMON ETIOLOGIES OF HEART FAILURE IN OLDER ADULTS
Valvular cardiomyopathy is an increasingly common cause of heart failure at older age. Calcific aortic stenosis is now the most common form of
valvular heart disease requiring invasive treatment, and aortic valve replacement is the second most common major cardiac procedure performed in patients older than 70 years (after coronary bypass grafting). Mitral regurgitation in older individuals may be caused by myxomatous degeneration of the mitral valve leaflets and chordae tendineae (mitral valve prolapse), mitral annular calcification, valvular vegetations, ischemic papillary muscle dysfunction, or altered ventricular geometry owing to ischemic or nonischemic dilated cardiomyopathy. Importantly, mitral regurgitation may be acute (eg, following acute myocardial infarction), subacute (eg, endocarditis), or chronic (eg, myxomatous degeneration), and the clinical manifestations may vary widely in each of these settings. In the United States, rheumatic mitral stenosis is a less common cause of heart failure in older adults. Functional mitral stenosis owing to severe mitral valve annulus calcification with narrowing of the mitral valve orifice is an uncommon cause of heart failure, but it is associated with a poor prognosis. Aortic insufficiency may be either acute (eg, because of endocarditis or type A aortic dissection) or chronic (eg, annuloaortic ectasia or syphilitic aortitis), but it is a relatively infrequent cause of heart failure in older adults. Finally, prosthetic valve dysfunction should be considered as a potential cause of heart failure in any patient who has undergone previous valve repair or replacement.
In older adults, ischemic cardiomyopathy from one or more prior myocardial infarctions is the most common cause of heart failure.
Nonischemic dilated cardiomyopathy is less common in older than in younger individuals; when present, it is most often idiopathic or genetic in origin, or attributable to chronic ethanol abuse or cancer chemotherapy (eg, anthracyclines or trastuzumab). Stress cardiomyopathy (also known as takotsubo cardiomyopathy) is a cause of acute heart failure usually precipitated by physical or psychological stress; the majority of affected patients are women. Sarcomeric hypertrophic cardiomyopathy, once thought to be rare in the older age group, has been increasingly recognized in adults older than 65 years. Similarly, restrictive cardiomyopathy, most commonly owing to transthyretin amyloid deposition, is an increasingly recognized cause of HFPEF in older adults.
Clinical and autopsy series have shown an age-dependent prevalence of wild-type transthyretin cardiac amyloidosis (ATTRwt, formerly called senile cardiac amyloidosis), which is diagnosed in adults older than 60 years.
Transthyretin cardiac amyloidosis is due either to the deposition of wild-type transthyretin protein (also known as prealbumin) or to a variant in the transthyretin gene (ATTRv); such variants are present in up to 4% of African-Americans, who are at increased risk for developing cardiac amyloidosis with advancing age. While the genetic defect is present from birth, penetrance is age dependent and clinical manifestations typically do not become apparent until after age 60. Wild-type transthyretin amyloidosis has been found in 13% of older adults hospitalized for HFPEF who have an increased left ventricular wall thickness of more than 12 mm. Novel therapies that inhibit production or promote stabilization of transthyretin, such as tafamidis, have been shown to reduce morbidity and mortality in this disease, though concerns about cost could limit access.
Infective endocarditis is an uncommon but important cause of heart failure in older patients because it is one of the few etiologies for which curative pharmacologic therapy is available. Endocarditis should be strongly suspected in any patient with persistent fever and either a prosthetic heart valve or a preexisting valvular lesion. It should also be considered in any patient with fever, recent dental work or other procedure, and a new or worsening heart murmur. It is important to recognize, however, that the clinical manifestations of endocarditis are often protean, and the absence of fever or a heart murmur does not exclude this diagnosis in older individuals.
Myocarditis is an uncommon cause of heart failure in older adults. It is most commonly infectious (eg, post-viral) but can be noninfectious (eg, owing to sarcoid or collagen vascular disease). Increasing cases are being described in the setting of cancer therapeutics, particularly immune checkpoint inhibitors. Pericardial effusions, for which there are numerous etiologies, occasionally present with heart failure symptomatology, including fatigue, exertional dyspnea, and edema. Constrictive pericarditis may be infectious (eg, tuberculous) or noninfectious (eg, postradiation), but it is a rare cause of heart failure in older patients.
High-output failure is a cause of heart failure in older adults, but when present the diagnosis is frequently overlooked. Potential causes of high-output failure include chronic anemia, hyperthyroidism, thiamine deficiency, arteriovenous shunting (eg, owing to a dialysis fistula or arteriovenous malformations), and morbid obesity.
Finally, in a small percentage of older heart failure patients, detailed investigation may fail to identify any primary cardiovascular pathology. In
cases with a normal left ventricular ejection fraction, heart failure may be attributed to age-related diastolic dysfunction.
Precipitating Factors
In addition to determining the etiology of heart failure, it is important to identify coexisting factors that may have contributed to the acute or subacute exacerbation (Table 76-4). The most common precipitant in patients with preexisting heart failure is nonadherence to medications and/or diet. Indeed, nonadherence may contribute to as many as two-thirds of heart failure exacerbations. Older patients with cognitive impairment, depression, poor mobility, or limited social support may have particular difficulty following complex treatment plans, and this is important to remember when designing heart failure self-care and medication regimens.
TABLE 76-4 ■ COMMON PRECIPITANTS OF HEART FAILURE IN OLDER ADULTS
Among cardiac factors, myocardial ischemia or infarction and new-onset or recurrent atrial fibrillation or flutter are the most common causes of an acute episode of heart failure. Other cardiac causes include ventricular arrhythmias, especially ventricular tachycardia, and bradyarrhythmias, such as marked sinus bradycardia or advanced atrioventricular block. Sick sinus syndrome, which is common in older adults, is a frequent cause of bradyarrhythmias in this population. In hospitalized patients, iatrogenic volume overload is also an important precipitant of heart failure.
As previously discussed, older patients have limited cardiovascular reserve and they are less able to compensate in response to increased demands. As a result, heart failure in older adults can be precipitated by acute or worsening noncardiac conditions. Patients with acute respiratory disorders, such as pneumonia, pulmonary embolism, or an exacerbation of chronic obstructive lung disease, are particularly prone to exhibit deterioration in cardiac function. Other serious infections, such as sepsis or pyelonephritis, may also lead to heart failure exacerbations. In patients with hypertension, inadequate blood pressure control is a common cause of worsening heart failure. Thyroid disease, anemia, and declining renal
function may also contribute directly or indirectly to the development of heart failure. Substance abuse can also worsen heart failure. Alcohol is a cardiac depressant, and it may also precipitate arrhythmias, especially atrial fibrillation.
Finally, numerous drugs and medications may contribute to heart failure exacerbations. The American Heart Association issued a statement in 2016 enumerating medications that can cause or worsen heart failure.
Approximately 30% to 50% of older adults with heart failure take at least one medication that can cause or worsen heart failure, an observation that likely relates to their high prevalence of multimorbidity and polypharmacy. Nonsteroidal anti-inflammatory drugs (NSAIDs) are among the most commonly used offenders and can worsen heart failure by impairing renal sodium and water excretion and contributing to intravascular volume overload. In addition, NSAIDs antagonize the effects of angiotensin-converting enzyme (ACE) inhibitors, thereby limiting the efficacy of these agents. Corticosteroids and estrogen preparations can also cause fluid retention and an increase in plasma volume. Insulin-sensitizing thiazolidinediones (rosiglitazone and pioglitazone) can also cause fluid retention, and thus worsen heart failure. Cardiovascular medications can also potentially exert negative effects in heart failure. For example, the antihypertensive agent minoxidil also promotes fluid retention, and several other antihypertensive drugs (eg, clonidine) may have unfavorable hemodynamic effects. Even β-blockers (including ophthalmologic agents) and calcium channel blockers, which are widely used in older individuals with cardiovascular disease, can exacerbate heart failure when used in excess because they are negative inotropes. Class Ia (eg, quinidine, procainamide, and disopyramide) and Ic (eg, flecainide and propafenone) antiarrhythmic agents also have important myocardial depressant effects that may worsen cardiac function.
CLINICAL FEATURES
Symptoms
The most common symptoms of heart failure in older adults are exertional shortness of breath, orthopnea, edema, bloating, fatigue, and exercise intolerance. However, atypical symptoms are common in older patients, particularly those older than 80 years (Table 76-5). As a result, heart failure
in older adults is paradoxically both over- and underdiagnosed. Thus, shortness of breath in an older individual may be attributed to heart failure when the underlying cause is chronic lung disease, pneumonia, or anemia. Similarly, fatigue and reduced exercise tolerance may be caused by anemia, hypothyroidism, depression, or deconditioning. On the other hand, sedentary individuals and those limited by arthritis or neuromuscular conditions may not report exertional dyspnea or fatigue, and atypical symptoms such as those listed in Table 76-5 may be the first and only clinical manifestations of heart failure. In such cases, the clinician must maintain a high index of suspicion or the diagnosis of heart failure may be readily overlooked.
TABLE 76-5 ■ ATYPICAL MANIFESTATIONS OF HEART FAILURE IN OLDER PERSONS
Signs
The physical findings in older heart failure patients may be nonspecific or atypical. The classic signs of heart failure include pulmonary rales, an
elevated jugular venous pressure, abdominojugular reflux, an S3 gallop, and pitting edema of the lower extremities. However, rales in older individuals may be due to chronic lung disease, pneumonia, or atelectasis; and peripheral
edema may be caused by venous insufficiency, renal disease, immobility, or
medication (eg, calcium channel blockers). Conversely, older patients may have an unremarkable physical examination despite markedly reduced cardiac performance. In some cases, impaired sensorium or Cheyne-Stokes respirations may be the only findings to suggest the presence of heart failure.
DIAGNOSTIC EVALUATION
Heart failure is difficult to diagnose in older patients with multiple comorbid conditions and either vague or nonspecific symptoms and signs. Thus, clinicians need to perform a careful history and physical examination, giving due consideration to potential alternative etiologies for the patient’s findings. While physical signs may be unreliable in older patients, certain findings, including pulsus alternans, an S3 gallop, and the presence of jugular venous
distension at rest or in response to the abdominojugular reflux maneuver, are highly specific signs of heart failure in older patients. In the absence of these findings, the diagnosis often remains in doubt, and additional laboratory studies are required.
To differentiate shortness of breath attributable to heart failure from other causes, the level of B-type natriuretic peptide (BNP, a 32-amino acid hormone released by the cardiac ventricles in response to increased wall tension) or its inactive fragment N-terminal pro-BNP (NT-proBNP) is the single most useful test. However, natriuretic peptide levels increase modestly with age (especially in women; Figure 76-6), with declining renal function, and with worsening anemia, and are generally lower with a higher BMI. Therefore, the specificity of an elevated natriuretic peptide level for identifying heart failure declines with age. BNP levels more than 500 pg/mL in the appropriate clinical context are highly suggestive of active heart failure, whereas a normal value (< 100 pg/mL) in a nonobese older adult makes the diagnosis of heart failure much less likely. In addition to the BNP level, the chest radiograph remains useful for establishing the presence of active pulmonary congestion. In patients with moderate or severe heart failure, the chest film will usually demonstrate typical findings of cardiomegaly, pulmonary vascular redistribution or edema, and pleural
effusions. However, in patients with mild heart failure or coexisting pulmonary disease, the chest radiograph may be nondiagnostic.
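The decision thresholds above can be captured in a short triage sketch (the function name and the obesity flag are illustrative assumptions, not from the text):

```python
def interpret_bnp(bnp_pg_ml, obese=False):
    """Illustrative triage of a BNP value for suspected heart failure.

    Thresholds follow the chapter: > 500 pg/mL in the appropriate
    clinical context is highly suggestive of active heart failure;
    < 100 pg/mL in a nonobese older adult makes the diagnosis much
    less likely. Obesity lowers BNP, so a "normal" value is less
    reassuring in an obese patient.
    """
    if bnp_pg_ml > 500:
        return "highly suggestive of active heart failure"
    if bnp_pg_ml < 100 and not obese:
        return "heart failure much less likely"
    # Intermediate values (or low values in obesity) are nondiagnostic;
    # age, sex, renal function, anemia, and BMI all shift the level.
    return "indeterminate; correlate with imaging and clinical context"
```

Such a helper is only a mnemonic for the cutoffs; as the text emphasizes, interpretation must account for the age- and comorbidity-related shifts in natriuretic peptide levels.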
FIGURE 76-6. B-type natriuretic peptide levels by age and gender (mean values in healthy volunteers). (Data from Redfield MM, Rodeheffer RJ, Jacobsen SJ, et al. Plasma brain natriuretic peptide concentration: impact of age and gender. J Am Coll Cardiol. 2002;40[5]:976– 982.)
Once heart failure has been diagnosed, the physician must address two crucial questions, the answers to which will serve as the basis for selecting appropriate therapy:
What is the underlying etiology and pathophysiology of heart failure (see Table 76-3)?
What additional factors, if any, contributed to or precipitated the development of heart failure (see Table 76-4)? Often, one or more precipitating factors can be identified, and alleviating these factors may significantly improve symptoms and reduce the likelihood of subsequent heart failure exacerbations.
In 2017, the American College of Cardiology and American Heart
Association Task Force on Practice Guidelines published revised guidelines
for the diagnosis and management of heart failure. Table 76-6 outlines an appropriate initial diagnostic assessment for patients with new-onset heart failure. Class I studies are defined as those that are indicated in most patients, class II procedures are acceptable in some patients but are of unproven efficacy and may be controversial, and class III studies are not routinely indicated and, in some cases, may be harmful. Briefly, basic laboratory studies, a thyroid function test, a chest radiograph, an electrocardiogram, and an echocardiogram with Doppler are recommended in all patients. Cardiac catheterization and coronary angiography are appropriate in patients with angina or significant ischemia on noninvasive testing, and in those who require surgical correction of a valve lesion (eg, aortic stenosis), unless the patient is not a suitable candidate for coronary revascularization.
TABLE 76-6 ■ DIAGNOSTIC EVALUATION OF PATIENTS WITH HEART FAILURE
and lambda free light chains and serum and urine protein electrophoresis with immunofixation) or pheochromocytoma.
Troponin in those with suspected myocardial ischemia or myocarditis.
Screening for sleep-disordered breathing.
Stress test to evaluate for ischemia in patients with unexplained heart failure who are potential candidates for revascularization.
Coronary angiography if ischemia may be contributing to heart failure in patients who are potential candidates for revascularization.
Invasive hemodynamic monitoring can be useful for carefully selected patients with acute HF who have persistent symptoms despite empiric adjustment of standard therapies and (a) whose fluid status, perfusion, or systemic or pulmonary vascular resistance is uncertain; (b) whose systolic pressure remains low, or is associated with symptoms, despite initial therapy; (c) whose renal function is worsening with therapy; (d) who require parenteral vasoactive agents; or (e) who may need consideration for mechanical circulatory support or transplantation.
Endomyocardial biopsy when a specific diagnosis is suspected that would influence therapy.
Technetium-99m scintigraphy if amyloidosis is suspected.
Class III (not routinely indicated)
Routine repeat measurement of left ventricular function in stable patients.
Endomyocardial biopsy as a routine procedure in the evaluation of patients with heart failure.
Data from Yancy CW, Jessup M, Bozkurt B, et al. 2017 ACC/AHA/HFSA Focused Update of the 2013 ACCF/AHA Guideline for the Management of Heart Failure: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Failure Society of America. Circulation. 2017;136(6):e137–e161.
The recommendations outlined in Table 76-6 are targeted toward a broad range of adult heart failure patients, and most are applicable even in patients at an advanced age. Nonetheless, in older patients it is appropriate to consider the potential risks and benefits of each diagnostic procedure on an individualized basis, considering comorbid conditions, the extent of cardiac and noncardiac disability, and the patient’s goals of care. For example, in a frail 85-year-old individual with severe diabetic nephropathy, the risk of precipitating dialysis-dependent end-stage renal disease as a complication of coronary angiography must be carefully weighed against the potential benefits to be derived from a successful revascularization procedure.
Similarly, patient autonomy must be respected in all cases, and it is inappropriate to exert pressure on an older patient to undergo a procedure that the patient clearly does not desire. In this regard, it is imperative to discuss the therapeutic implications of specific procedures (especially invasive procedures) with respect to the patient’s subsequent care (eg, need for coronary bypass surgery) prior to performing the diagnostic assessment.
Heart Failure With Reduced Versus Heart Failure With Preserved Ejection Fraction
Current nomenclature distinguishes two forms of heart failure: HFREF, usually defined as an ejection fraction less than 40% to 50%, and HFPEF. The clinical manifestations of the two forms are similar. No single clinical feature can reliably distinguish patients with HFREF from those with HFPEF, although certain features tend to favor one form or the other (Table 76-7). The accuracy of algorithms that predict HFREF versus HFPEF is modest, and additional testing is essential to reliably differentiate the two.
TABLE 76-7 ■ CLINICAL FEATURES OF HEART FAILURE WITH REDUCED VERSUS PRESERVED EJECTION FRACTION
An important goal of the diagnostic evaluation is differentiating HFREF from HFPEF since the management of these two syndromes differs. As noted, it is difficult to make this distinction on clinical grounds alone, and it is therefore essential to evaluate left ventricular function directly by echocardiography, radionuclide angiography (commonly called a MUGA [multiple-gated acquisition] scan), magnetic resonance imaging, or contrast ventriculography. In general, transthoracic echocardiography is the most useful technique because it is noninvasive, widely available, and, in addition to providing information about systolic and diastolic function, it is helpful in evaluating chamber size, wall thickness and motion, valve function,
pulmonary artery pressure, and pericardial disease. Thus, transthoracic echocardiography is appropriate in virtually all older patients with newly diagnosed heart failure and in those with an unexplained change in symptom severity. The principal limitation of echocardiography is that adequate visualization of the heart may be unobtainable in a small percentage of patients, although the availability of echo-contrast agents has reduced this problem. Alternatively, radionuclide angiography can provide an accurate assessment of left ventricular function, as well as information about cavity size and regurgitant valvular lesions. Magnetic resonance imaging provides more detail about myocardial characteristics (scar, inflammation, edema) than echocardiography but cannot assess diastolic function easily, is more expensive and less widely available, and some types of contrast administration are contraindicated in patients with impaired renal function.
Based on the results of echocardiography, radionuclide angiography, magnetic resonance imaging, or contrast ventriculography, heart failure may be classified as HFREF or HFPEF (in the ensuing discussion, HFREF is defined as ejection fraction < 45% and HFPEF as ejection fraction ≥ 45%). However, it must be emphasized that systolic and diastolic dysfunction are not mutually exclusive. Indeed, almost all patients with significant systolic dysfunction also have concomitant echo Doppler evidence of diastolic dysfunction. Conversely, systolic dysfunction may play a role in the development of heart failure even when the ejection fraction under resting conditions is normal or near normal. Despite these limitations, the classification of heart failure as HFREF or HFPEF is useful in guiding therapy.
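The working definitions above reduce to a single ejection fraction cutoff; as a minimal sketch (the function name and parameterized cutoff are assumptions for illustration):

```python
def classify_by_ef(ef_percent, cutoff=45.0):
    """Classify heart failure by resting left ventricular ejection
    fraction, using this chapter's working definitions:
    HFREF if EF < 45%, HFPEF if EF >= 45%.
    """
    return "HFREF" if ef_percent < cutoff else "HFPEF"

# classify_by_ef(30) -> "HFREF"; classify_by_ef(55) -> "HFPEF"
```

The cutoff is a convention rather than a biological boundary; as noted, systolic and diastolic dysfunction frequently coexist in the same patient.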
MANAGEMENT
The primary goals of heart failure therapy are to improve quality of life, reduce the frequency of heart failure exacerbations, maximize independence and exercise capacity, enhance emotional well-being, and extend survival.
To achieve these goals, optimal therapy in older patients comprises three principal components: correction of the underlying etiology whenever possible (eg, aortic valve replacement for severe aortic stenosis or coronary revascularization for severe ischemia), attention to the nonpharmacologic and rehabilitative aspects of treatment, and the judicious use of medications and device-based therapies.
As will be discussed in the section on Prognosis, the outlook for patients with established heart failure is poor. Therefore, the importance of effectively treating the primary etiology and all comorbid conditions predisposing to heart failure cannot be overemphasized. Since coronary heart disease and hypertension are the most common causes of heart failure in older adults, primary and secondary prevention of these conditions are critical if the development of heart failure is to be forestalled. Indeed, it has now been shown in multiple clinical trials that effective treatment of hypertension can reduce the incidence of heart failure by more than 50%.
Similarly, appropriate management of other coronary risk factors, particularly hyperlipidemia, sedentary lifestyle, and cigarette smoking, will undoubtedly further reduce the burden of heart failure through the primary prevention of coronary heart disease.
Nonpharmacologic Therapy
Despite recent advances in the pharmacotherapy of heart failure, recurrent heart failure exacerbations are common and are more often precipitated by behavioral and social factors than by either new cardiac events (eg, ischemia or an arrhythmia) or progressive deterioration in ventricular function. In one study, lack of adherence to prescribed medications and/or diet contributed to 64% of heart failure exacerbations, while emotional and environmental factors contributed to 26% of hospital readmissions. In another study involving 140 patients 70 years or older hospitalized with heart failure, 47% were readmitted at least once during a 90-day follow-up period. Behavioral and social factors contributing to readmission included medication and dietary nonadherence (15% and 18%, respectively), inadequate social support (21%), inadequate discharge planning (15%), inadequate follow-up (20%), and failure of the patient to seek medical attention promptly when symptoms recurred (20%). These findings suggest that interventions directed at behavioral and social factors could potentially reduce readmissions and improve quality of life in patients with heart failure, and this hypothesis has now been confirmed in numerous prospective randomized trials. In a meta-analytic review of 33 such trials, heart failure readmissions were reduced by 42%, all-cause readmissions were reduced by 24%, and mortality was reduced by 20% in patients with heart failure enrolled in a disease management program relative to conventional care.
Components of a comprehensive nonpharmacologic treatment program are listed in Table 76-8. As with other aspects of geriatric care, it is important to structure the treatment program to accommodate the needs of each individual patient. Not every patient will require all the components listed in the table. Similarly, the optimal intensity of any component, for example, patient education or follow-up care, will vary substantially. For these reasons, it is desirable to designate a single provider to coordinate all aspects of the patient’s care.
TABLE 76-8 ■ NONPHARMACOLOGIC ASPECTS OF HEART FAILURE MANAGEMENT
By virtue of age and preexisting cardiopulmonary disease, older adults with heart failure are particularly vulnerable to the adverse effects of pneumonia and respiratory viruses such as influenza and SARS-CoV-2.
Vaccinations have been shown to be effective and safe in older adults with heart failure and are thus recommended to prevent morbidity and mortality related to these conditions. Specifically, influenza vaccination is associated with reduced risk of death in patients with heart failure, and pneumonia vaccines are also recommended by current guidelines.
Nutrition and Diet
Dietary guidance for patients with heart failure has classically emphasized limiting sodium intake. This recommendation is based on the observation that
patients with heart failure who consume excess sodium can retain fluid volume owing to increased neurohormonal activation and renal sodium reabsorption. Because of the increased consumption of restaurant and prepackaged foods, sodium restriction is often difficult to implement in practice.
Moreover, guideline recommendations for total daily sodium limit range from 1500 to 3000 mg/d and are based primarily on expert consensus rather than clinical trial data. In addition to uncertainty about appropriate targets, dietary sodium restriction can have potential harms in older patients with heart failure. First, aggressive reduction of sodium intake can lead to hypovolemia, orthostasis, decreased renal perfusion, and further activation of the neurohormonal axis. These issues can be mitigated through close clinical follow-up and adjustment of diuretics and other medications.
Less often appreciated is the relationship between sodium restriction and poor nutritional status. Malnutrition is a strong risk factor for death and hospitalization in heart failure, particularly in older and/or frail individuals. Dietary sodium restriction has been associated with insufficient calorie intake and dietary deficiencies of critical micronutrients, both of which in turn predict poor clinical outcomes. Older patients with heart failure face a myriad of challenges in maintaining adequate nutrition, including age-related changes in taste and smell, symptoms such as shortness of breath, fatigue, bloating, and nausea, and psychological and logistical factors such as depression, cognitive impairment, poor mobility, and limited social support.
Despite these issues, few dietary intervention clinical trials have been completed in patients with heart failure. The SODIUM-HF pilot study (38 patients, mean age 65) demonstrated that individualized counseling with a dietitian could achieve dietary sodium restriction without compromising overall nutrient intake. The ongoing multinational SODIUM-HF trial (recruiting 1000 patients with stable heart failure) will clarify whether dietitian-guided aggressive sodium restriction (< 1500 mg/d) improves survival free of death or heart failure hospitalization versus usual care. The Spanish PICNIC trial (Nutritional Intervention Program in Hospitalized Patients with Heart Failure) studied intensive, highly individualized monthly dietary counseling in malnourished patients (mean age 79) who survived heart failure hospitalization. In 120 participants, the 6-month nutritional intervention markedly reduced death or heart failure hospitalization at 1-year post-discharge (27% vs 61%, p < 0.001).
The optimal dietary recommendations for older patients with heart failure have not been determined. The Dietary Approaches to Stop Hypertension (DASH) eating pattern and, to a slightly lesser extent, the Mediterranean diet have been associated with lower long-term mortality in postmenopausal women with heart failure. Both options are reasonable, although guidance may need to be modified by food preference, economic concerns, or comorbidities (eg, potassium content of the DASH diet in the setting of chronic kidney disease). While obesity is a strong risk factor for heart failure (particularly HFPEF) and associated comorbidities, weight loss has been associated with poor outcomes in multiple heart failure cohorts. Weight loss is thus a controversial topic, and advice may need to be modified based on the presence of frailty or sarcopenia.
Physical Activity and Exercise
Historically, patients with heart failure were advised to restrict physical activity on the basis that exercise could potentially worsen cardiac function or precipitate arrhythmias. However, it is now recognized that excessive limitation of physical activity progressively worsens functional capacity because of deconditioning. In addition, several studies have demonstrated that participation in an appropriately structured exercise program may significantly improve functional capacity and quality of life in patients with heart failure. In the largest of these trials, HF-ACTION, 2331 patients with stable heart failure and an ejection fraction less than or equal to 35% were randomized to a supervised exercise program or usual care. The mean age was 59 (25% were ≥ 68 years) and 28% were women. After a median follow-up of 30 months, patients randomized to the exercise intervention experienced a 7% reduction in the primary end point of all-cause mortality or all-cause hospitalization, but the difference was not significant (p = 0.13).
After adjusting for highly prognostic baseline characteristics, exercise was associated with an 11% reduction in the primary end point (p = 0.03). In addition, exercise was associated with improved health status beginning at 3 months and persisting for up to 4 years. Based on these findings, current guidelines recommend regular exercise for most patients with heart failure. In addition, in 2014, the Centers for Medicare and Medicaid Services approved structured cardiac rehabilitation and exercise training for HFREF patients like those enrolled in the HF-ACTION trial.
While data on exercise training in older adults are limited, a randomized trial involving 200 patients 60 to 89 years (mean 72 years, 66% male) with New York Heart Association (NYHA) class II to III HFREF evaluated the effects of exercise prescription, education, occupational therapy, and psychosocial counseling. At 24 weeks of follow-up, intervention group patients experienced significant improvements in NYHA class, 6-minute walk distance, and quality of life, whereas control-group patients demonstrated no change from baseline in any of these parameters. Patients receiving the intervention also had significantly fewer hospital admissions relative to the control group. In addition, three small randomized trials in older patients with HFPEF have demonstrated that exercise training is safe and results in improved exercise capacity and quality of life. These data provide support for a beneficial effect of exercise and cardiac rehabilitation in older patients with either HFREF or HFPEF. Nonetheless, additional studies focused on traditional or remotely delivered therapy are needed to evaluate the safety and efficacy of regular exercise in older heart failure patients, especially those older than 75 years, with frailty or multiple comorbid conditions, and/or who have been recently hospitalized.
Exercise Prescription
A comprehensive exercise and conditioning program is appropriate for most older patients with mild-to-moderate heart failure symptoms and no other contraindications to exercise. Table 76-9 enumerates specific contraindications that should be considered. Table 76-10 outlines the basic components of such a program. In general, patients should try to exercise every day. A typical session should include some gentle stretching exercises as well as strengthening exercises using elastic bands or light weights and targeting all the major muscle groups. Suitable forms of aerobic exercise for older patients include walking, stationary cycling, and swimming. The choice of aerobic exercise should be tailored to the patient's wishes and abilities. When initiating an exercise program, the duration and intensity of the aerobic activity should be well within the patient's comfort range. The activity should be enjoyable, not stressful, and after completing the activity the patient should feel "positive" about the experience and not unduly fatigued. For many older patients with heart failure, this may mean starting with as little as 2 to 5 minutes of slow-paced walking. Once the patient feels comfortable exercising, the duration of exercise can be gradually increased over a period of several weeks. Weekly increases of 1 to 2 minutes per session are appropriate for most patients. Once the patient can
exercise continuously and comfortably for 20 to 30 minutes, the intensity of exercise may be increased, if desired. More recently, high-intensity interval training, in which short bursts of higher-intensity exercise are incorporated into the exercise regimen, has been shown to be safe and to result in more rapid increases in exercise capacity in heart failure patients. These findings, while encouraging, should be regarded as preliminary, and high-intensity training should only be initiated in a monitored setting.
TABLE 76-9 ■ CONTRAINDICATIONS TO EXERCISE IN OLDER PATIENTS
TABLE 76-10 ■ EXERCISE PRESCRIPTION FOR OLDER PATIENTS WITH HEART FAILURE
The two most common techniques for monitoring exercise intensity are the target heart rate method and the patient's subjective assessment of perceived exertion. For patients not taking medications that lower heart rate (eg, β-blockers), the maximum attainable heart rate in beats/min can be estimated from the formula 208 − (0.7 × age). The patient's resting heart rate is then subtracted from this figure to determine the heart rate reserve. A suitable target heart rate for low-intensity exercise can be calculated as the resting heart rate plus 30% to 50% of the heart rate reserve. For moderate-intensity exercise, the target range is the resting heart rate plus 50% to 70% of the heart rate reserve.
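The arithmetic above can be worked through in a short sketch (the function name is illustrative; the formula and percentage bands are those given in the text):

```python
def target_heart_rate(age, resting_hr, low_frac, high_frac):
    """Target heart rate band using the heart rate reserve method.

    Maximum heart rate is estimated as 208 - 0.7 * age; the reserve is
    the difference between that estimate and the resting rate. The
    target band adds a fraction of the reserve back to the resting rate.
    """
    hr_max = 208 - 0.7 * age
    reserve = hr_max - resting_hr
    return (resting_hr + low_frac * reserve,
            resting_hr + high_frac * reserve)

# A 75-year-old with a resting rate of 70 beats/min:
# estimated maximum = 208 - 52.5 = 155.5, so the reserve is 85.5
low_band = target_heart_rate(75, 70, 0.30, 0.50)   # low intensity
mod_band = target_heart_rate(75, 70, 0.50, 0.70)   # moderate intensity
# low_band is approximately (95.7, 112.8) beats/min
```

As the following paragraph notes, this calculation is impractical for many older patients, which is why perceived exertion is often the preferred monitoring method.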
For many older patients, calculating the target heart rate may be difficult. In addition, it may not be possible to accurately determine heart rate during exercise (unless a heart rate monitor is used). For these reasons, the patient’s subjective assessment of perceived exertion is often the most practical method for monitoring exercise intensity. In addition, perceived exertion correlates reasonably well with exercise heart rate. A simple perceived exertion scale (Borg Scale) comprises five levels: very light, light, moderate, somewhat heavy, and heavy. Older patients with heart failure should begin with very light exercise, progressing to the light range as tolerated. After several weeks, some patients may wish to increase their perceived exertion level into the moderate range, but more strenuous exercise is not recommended for patients with heart failure.
Pharmacologic Treatment of Heart Failure With Reduced Ejection Fraction
In general, the treatment of HFREF in older patients does not differ substantially from that in younger patients. The primary goal of pharmacotherapy for HFREF is to reduce mortality and prevent events such as HF hospitalizations. Many patients who receive aggressive therapy experience a substantial improvement in ejection fraction, and many of these agents also improve symptoms and quality of life. However, as with any other medication prescribed to older adults, it is important to weigh the risks and potential benefits in both the short and long term in conjunction with other comorbid conditions, geriatric syndromes such as frailty and cognitive impairment, overall life expectancy, and health priorities.
β-Blockers As recently as 20 years ago, β-adrenergic blocking agents were considered contraindicated in patients with heart failure owing to their negative inotropic and chronotropic effects, both of which can diminish cardiac output. However, it is now recognized that persistent activation of the sympathetic nervous system is detrimental in patients with heart failure because it exacerbates ischemia, causes arrhythmogenesis, promotes β-receptor desensitization, and contributes to a progressive decline in ventricular function. Furthermore, several large prospective randomized clinical trials have now confirmed that long-term β-blockade improves left ventricular function and reduces both total mortality and sudden cardiac death in a broad spectrum of patients with HFREF.
In the Study of the Effects of Nebivolol Intervention on Outcomes and Rehospitalization in Seniors with Heart Failure trial (SENIORS), 2128 patients 70 years or older (mean age 76, 37% women) were randomized to nebivolol or placebo. During a mean follow-up of 21 months, the primary composite outcome of death or cardiovascular hospitalization was significantly lower in patients randomized to nebivolol, with similar results in younger and older patients, including those older than 85 years. Based on these studies, β-blockers are now recommended as a standard therapy in almost all patients with symptomatic HFREF in the absence of contraindications.
In the United States, carvedilol, bisoprolol, and metoprolol succinate have been approved for the treatment of heart failure. Among the many β-blockers on the market, these are the only three agents that have been studied and shown to improve outcomes in heart failure; therefore, they are the β-blockers that should be used for this indication. Starting dosages are carvedilol 3.125 to 6.25 mg BID, metoprolol tartrate 6.25 mg BID or QID (or metoprolol succinate 12.5–25 mg daily), and bisoprolol 1.25 to 2.5 mg daily. Of note, although metoprolol succinate (long-acting) is the evidence-based formulation of metoprolol for HFREF, metoprolol tartrate (short-acting) may be a reasonable alternative when titrating to target doses. Doses should be increased gradually at approximately 2-week intervals, as tolerated, to achieve maintenance dosages of carvedilol 25 to 50 mg BID, metoprolol succinate 100 to 200 mg daily, and bisoprolol 10 mg daily.
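The titration approach described above (dose roughly doubled at about 2-week intervals, as tolerated, until the maintenance dose is reached) can be sketched as a simple schedule generator. The starting and maintenance doses for carvedilol are from the text; the function itself is purely illustrative and is not a clinical dosing tool — real-world titration depends on heart rate, blood pressure, and symptoms at each step.

```python
def titration_schedule(start_dose, target_dose, interval_weeks=2):
    """Generate an illustrative (week, dose-in-mg) up-titration schedule:
    the dose is approximately doubled every `interval_weeks` weeks until
    the maintenance (target) dose is reached. Not a dosing tool.
    """
    schedule, week, dose = [], 0, start_dose
    while dose < target_dose:
        schedule.append((week, dose))
        week += interval_weeks
        dose = min(dose * 2, target_dose)  # never overshoot the target
    schedule.append((week, dose))
    return schedule

# Illustrative example: carvedilol 3.125 mg BID titrated toward 25 mg BID
for week, dose in titration_schedule(3.125, 25):
    print(f"week {week}: {dose} mg BID")
```

Run as written, this prints doses of 3.125, 6.25, 12.5, and 25 mg BID at weeks 0, 2, 4, and 6, matching the "double every ~2 weeks as tolerated" pattern in the text.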
Contraindications to the use of β-blockers include severe decompensated heart failure, significant bronchospastic lung disease, marked bradycardia (resting heart rate < 50/min), systolic blood pressure less than 90 to 100 mm Hg, advanced heart block (> first degree), and known intolerance to β-blockade. It is important to monitor heart rate, blood pressure, clinical symptoms, and the cardiorespiratory examination during initiation and titration of therapy. Patients should be advised that they may experience a modest worsening in heart failure symptoms, especially fatigue, during the first few weeks of β-blocker therapy, but that in most cases these symptoms resolve, and the long-term tolerability of β-blockers is excellent. However, if severe adverse effects occur, dosage reduction or discontinuation of treatment may be necessary. Notably, hemodynamic intolerance to β-blockers may be suggestive of an advanced stage of heart failure and thus may warrant evaluation by a specialist, if one is not already involved in the care of the patient.
ACE inhibitors Numerous prospective randomized clinical trials using multiple different angiotensin-converting enzyme (ACE) inhibitors in a variety of clinical settings have conclusively demonstrated that these agents significantly reduce mortality and hospitalization rates and improve exercise tolerance and quality of life in patients with impaired left ventricular systolic function, even in the absence of clinical heart failure. Although none of these studies included patients older than 80 years, available evidence indicates that ACE inhibitors are as effective in older patients as in younger ones.
In older patients, therapy should be initiated with a low dose (eg, captopril 6.25–12.5 mg TID or enalapril 2.5–5 mg BID), and the dose should be gradually increased as tolerated. In hospitalized patients who are hemodynamically stable, the dose may be increased daily; in outpatients, the dose should be increased weekly or biweekly. Throughout the titration
period, blood pressure, renal function, and serum potassium levels should be monitored.
For maintenance therapy, ACE inhibitor dosages should be commensurate with those used in the clinical trials. Recommended “target” doses for selected ACE inhibitors are as follows: captopril 50 mg TID, enalapril 10 to 20 mg BID, lisinopril 20 to 40 mg daily, ramipril 10 mg daily, trandolapril 4 mg daily, quinapril 40 mg BID, and fosinopril 40 mg daily. In patients unable to tolerate full therapeutic doses of ACE inhibitors, lower doses may be used; however, the clinical benefits may be attenuated with lower dosages.
Clearly, the risks and benefits of higher doses must be weighed for each individual patient. Captopril and enalapril are excellent agents for use during the titration phase given their short half-lives, but once the maintenance dose has been reached, it is desirable to change to a once-daily ACE inhibitor at equivalent dosage for reasons of increased convenience, potentially improved adherence, and lower cost.
The most common side effect from ACE inhibitors is a dry, hacking cough, which may be severe enough to require discontinuation of therapy in 5% to 10% of patients during long-term use. Less common but more serious side effects include hypotension, a decline in renal function, and hyperkalemia. These side effects tend to occur shortly after initiation of therapy and may be aggravated by intravascular volume contraction as a result of overdiuresis. Indications for downward titration or discontinuation of an ACE inhibitor include symptomatic hypotension, a persistent increase in serum creatinine of 1 mg/dL or greater, or a rise in the serum potassium level above 5.5 mEq/L. Note that asymptomatic low blood pressure does not necessarily mandate dosage reduction, but again, the risks and benefits must be weighed for each individual patient.
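The down-titration/discontinuation triggers listed above can be expressed as a minimal screening sketch. The thresholds (persistent creatinine rise ≥ 1 mg/dL, potassium > 5.5 mEq/L, symptomatic hypotension) are taken directly from the text; the function name and return format are hypothetical, and the sketch is no substitute for clinical judgment.

```python
def acei_safety_flags(creatinine_rise_mg_dl, potassium_meq_l,
                      symptomatic_hypotension):
    """Return the text's indications for down-titrating or stopping an
    ACE inhibitor that are met by the supplied monitoring values.
    Illustrative only; does not capture trends or clinical context.
    """
    flags = []
    if symptomatic_hypotension:
        flags.append("symptomatic hypotension")
    if creatinine_rise_mg_dl >= 1.0:          # persistent rise >= 1 mg/dL
        flags.append("persistent creatinine rise >= 1 mg/dL")
    if potassium_meq_l > 5.5:                 # hyperkalemia threshold
        flags.append("potassium > 5.5 mEq/L")
    return flags

# Example: stable creatinine but potassium 5.8 mEq/L
print(acei_safety_flags(0.4, 5.8, False))  # ['potassium > 5.5 mEq/L']
```

Note that an empty list does not mean therapy is safe — blood pressure, renal function, and potassium must still be followed throughout titration, as the text emphasizes.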
Angiotensin II receptor blockers The use of ARBs for the treatment of heart failure has been evaluated in several studies. In the second Evaluation of Losartan in the Elderly trial (ELITE-II), losartan 50 mg once daily was compared to captopril 50 mg TID in 3152 patients 60 years or older (mean age 71) with moderate heart failure and an ejection fraction of 40% or less, with similar mortality in the two treatment groups. In the Candesartan in Heart Failure: Assessment of Reduction in Mortality and Morbidity (CHARM)-Alternative study, 2028 patients intolerant to ACE inhibitors were randomized to candesartan or placebo and followed for a median of 34 months. Compared to patients in the placebo group, patients randomized to
candesartan experienced a significant 30% reduction in the composite end point of cardiovascular death or hospitalization for heart failure. All-cause mortality was reduced by 17%, which was of borderline statistical significance. The mean age of patients in the CHARM-Alternative study was approximately 66.5, and nearly one-fourth of patients were 75 years or older; however, subgroup analysis by age has not been reported. Accordingly, ARBs are approved for the treatment of heart failure with reduced ejection fraction. The recommended starting dose of valsartan is 20 to 40 mg BID, and the dose should be titrated to 160 mg BID as tolerated; the starting dose of candesartan is 4 to 8 mg once daily, with titration to 32 mg daily as tolerated; and the starting dose of losartan is 25 to 50 mg once daily, with titration up to 150 mg once daily as tolerated. For older adults, especially when there is significant concern for adverse effects, it may be reasonable to start at even lower doses (eg, cutting the lowest-dose pill in half) and titrate slowly based on tolerance. As with many drugs, especially among older adults, the adage of starting low and going slow applies.
It is important to note that combining an ACE inhibitor and an ARB is not recommended because of the adverse effects of using both agents concurrently. In the Valsartan in Acute Myocardial Infarction trial, 14,703 patients with heart failure and/or an ejection fraction less than 35% within 10 days of experiencing an acute myocardial infarction were randomized to receive valsartan, captopril, or both drugs. During a median follow-up of 25 months, there were no differences between groups with respect to all-cause mortality or the composite end point of fatal or nonfatal cardiovascular events. However, treatment-limiting side effects were more common in patients receiving both agents than in those receiving either drug alone. The median age was 65 years, and results were similar in older and younger patients.
The major side effects of ARBs are similar to those of ACE inhibitors and include hypotension, renal insufficiency, and hyperkalemia. Notably, ARBs bind directly to angiotensin II receptors on the cell membrane; thus, unlike ACE inhibitors, ARBs do not inhibit the breakdown of bradykinin, so bradykinin-mediated side effects such as cough are avoided.
Angiotensin receptor neprilysin inhibitor (ARNi) Sacubitril inhibits neprilysin, a neutral endopeptidase that degrades vasoactive peptides such as BNP, thereby potentiating the effects of natriuretic peptides. The combination of an angiotensin receptor antagonist (valsartan) with a neprilysin inhibitor (sacubitril) was shown to be superior to therapy with an ACE inhibitor (enalapril), significantly reducing the composite endpoint of cardiovascular death or HF hospitalization by 20% in the Prospective Comparison of ARNI with ACEI to Determine Impact on Global Mortality and Morbidity in Heart Failure (PARADIGM-HF) trial. The benefit was seen to a similar extent for both death and heart failure hospitalization and was consistent across subgroups, including patients younger and older than 75 years. ARNi therapy increases the risk of hypotension and renal insufficiency and may lead to angioedema, but the risks of elevations in creatinine and potassium were lower with ARNi therapy than with ACE inhibitor therapy. Transition from an ACE inhibitor or ARB to ARNi therapy is recommended for all patients with HFREF, with at least a 36-hour washout from ACE inhibitors to mitigate the risk of angioedema. ARNi therapy is also recommended as first-line therapy for stable HFREF patients and after an acute decompensation based on the results of the PIONEER-HF trial, which specifically studied the safety and short-term efficacy of in-hospital initiation of ARNi. Since systemic hypotension is common with ARNi therapy, caution is warranted among older adults with low systolic blood pressure. Unless patients are transitioned to ARNi from high-dose ACE inhibitors or ARBs, ARNi therapy should be initiated at low dosages (eg, sacubitril/valsartan 24/26 mg PO BID) with uptitration over time.
Mineralocorticoid receptor antagonists (MRA) The MRAs spironolactone and eplerenone are relatively weak diuretics that are potassium-sparing and interfere with the effect of aldosterone. In the Randomized Aldactone Evaluation of Survival trial, spironolactone 12.5 to 50 mg once daily reduced mortality by 30% and heart failure hospitalizations by 35% in patients with NYHA class III or IV heart failure and a left ventricular ejection fraction less than or equal to 35%, when added to baseline therapy with an ACE inhibitor, digoxin, and loop diuretic. Moreover, the beneficial effects of spironolactone were at least as great in older as in younger patients. In the Eplerenone Post-Acute Myocardial Infarction Heart Failure Efficacy and Survival study, eplerenone 25 to 50 mg once daily significantly reduced mortality by 15% over a mean follow-up period of 16 months in patients with clinical evidence for heart failure and an ejection fraction of 40% or less within 3 to 16 days following acute myocardial infarction.
Sudden death from cardiac causes and cardiovascular hospitalizations were also reduced in the eplerenone group. Compared to placebo, hyperkalemia occurred more commonly but hypokalemia occurred less frequently with eplerenone. The average age of patients in the Eplerenone Post-Acute
Myocardial Infarction Heart Failure Efficacy and Survival study was 64, and although the relative benefit of eplerenone was somewhat less in older compared to younger patients, the difference was not statistically significant.
The EMPHASIS-HF trial randomized 2737 patients with NYHA class II heart failure and a left ventricular ejection fraction less than or equal to 35% to eplerenone at a dose of up to 50 mg or matching placebo. The mean age was 69 and 24% of patients were 75 years or older. The primary outcome was death from cardiovascular causes or hospitalization for heart failure.
The study was stopped prematurely after a median follow-up of 21 months because eplerenone showed a marked reduction in the primary end point relative to placebo (18.3% vs 25.9%, hazard ratio 0.63, p < 0.001). Results were similar in patients over or under age 75. Eplerenone also reduced all- cause mortality, all-cause hospitalizations, and heart failure hospitalizations (hazard ratios 0.76, 0.77, and 0.58, respectively).
Based on the results of these studies, MRAs are recommended in patients with NYHA class II to IV heart failure symptoms and left ventricular ejection fraction less than or equal to 35%, and in patients with heart failure and an ejection fraction of 40% or less following myocardial infarction. These agents are contraindicated in patients with significant renal dysfunction (creatinine ≥ 2.5 mg/dL) or preexisting hyperkalemia. Older patients are at increased risk of adverse effects—accordingly, renal function and serum potassium levels should be monitored closely during initiation and titration of therapy (such as within 3–14 days of initiation or of increasing the dose). In addition, up to 10% of patients receiving long-term treatment with spironolactone may experience painful gynecomastia requiring discontinuation of the drug; this side effect occurs rarely with eplerenone.
Sodium-glucose cotransporter-2 (SGLT2) inhibitors SGLT2 inhibitors were developed initially to treat hyperglycemia. SGLT2 is the primary transport protein in the kidney that promotes reabsorption of glucose back into the circulation after glomerular filtration; it is located in the proximal tubule and is responsible for approximately 90% of glucose reabsorption. Large cardiovascular outcome trials in patients with type 2 diabetes demonstrated that SGLT2 inhibitors improve cardiovascular and renal outcomes and reduce the risk of hospitalization for heart failure. Two subsequent randomized clinical trials of SGLT2 inhibitors in patients with existing HFREF (Study to Evaluate the Effect of Dapagliflozin on the Incidence of Worsening Heart Failure or
Cardiovascular Death in Patients With Chronic Heart Failure [DAPA-HF] and Empagliflozin Outcome Trial in Patients With Chronic Heart Failure With Reduced Ejection Fraction [EMPEROR-Reduced]) confirmed that these agents reduce the risk of death and heart failure hospitalizations.
Additionally, they reduce the risk of renal events, defined as a 50% or greater sustained decline in estimated glomerular filtration rate (eGFR), end-stage renal disease (ESRD), or death due to renal disease. The efficacy of SGLT2 inhibitors appears similar in those older than 75 years compared with younger individuals and, notably, regardless of whether diabetes mellitus is present. The most common side effects of SGLT2 inhibitors include genital yeast infections in men and women, urinary tract infections (UTIs), urinary frequency, and renal dysfunction. These adverse outcomes were similar in younger and older individuals. Rare but more serious adverse effects, including diabetic ketoacidosis and amputation, are more common in older adults.
Hydralazine/nitrates In patients who are unable to tolerate an ACE inhibitor or ARB, the combination of hydralazine with oral or topical nitrates provides an acceptable alternative. The African American Heart Failure Trial (A-HeFT) randomized 1050 Black patients with NYHA class III or IV heart failure to a fixed-dose combination of isosorbide dinitrate plus hydralazine or to placebo in addition to standard heart failure therapy. The study was stopped after an average follow-up of 10 months because of a significantly lower mortality rate in patients randomized to the intervention. Heart failure hospitalizations were also reduced, and quality of life was improved in patients randomized to hydralazine-nitrates relative to placebo. Based on the results of A-HeFT, the fixed-dose combination of isosorbide dinitrate and hydralazine has been approved for treatment of heart failure in Black patients in the United States. Although there was no upper-age restriction for the A-HeFT study, the average age of patients enrolled in the trial was 57, so the efficacy of this therapy in older Black patients remains unknown.
For older patients, treatment should begin with lower dosages (eg, hydralazine 12.5–25 mg TID–QID; isosorbide dinitrate 10 mg TID–QID), followed by gradual upward titration to achieve the doses used in the trials. The most common side effects associated with hydralazine/nitrates include headache and dizziness. A small percentage of patients developed arthralgias or other symptoms suggestive of hydralazine-induced lupus. The requirement for multiple doses over the course of the day, which may be inconvenient and
contribute to increased pill burden and reduced overall nonadherence, should also be considered when prescribing to older adults.
Diuretics Diuretics are the most effective agents for relieving pulmonary congestion and edema, and for this reason they remain a key component of heart failure management. Although diuretics have not been shown to reduce mortality, they are effective in relieving the classic symptoms of heart failure, including edema and dyspnea.
In patients with mild chronic heart failure, a thiazide diuretic may be sufficient for relieving congestive symptoms and maintaining fluid homeostasis. However, most patients will require a more potent agent, and the “loop” diuretics, including furosemide, bumetanide, and torsemide, are the drugs most widely used. For optimal effectiveness, patients should be instructed to avoid excessive sodium and fluid intake. Typical daily doses of “loop” diuretics range from 20 to 160 mg for furosemide, 0.5 to 5 mg for bumetanide, and 5 to 100 mg for torsemide. In patients hospitalized with an acute episode of heart failure, intravenous administration may be more effective than the oral route in promoting diuresis, in part due to bowel wall edema, which may decrease the drug’s absorption. Patients who fail to respond adequately to a loop diuretic that has low bioavailability (eg, furosemide) may respond to a bioavailable loop diuretic (bumetanide or torsemide) or the addition of metolazone 2.5 to 10 mg daily.
The most common and important side effects of diuretics are electrolyte disturbances, including hypokalemia, hyponatremia, hypomagnesemia, and increased bicarbonate levels indicative of metabolic alkalosis. Owing to age-related changes in renal function as well as a higher prevalence of comorbid illnesses such as diabetes, older patients are at increased risk of serious diuretic-induced electrolyte abnormalities. For this reason, electrolytes should be monitored closely when diuretic therapy is being adjusted. This is particularly true when using metolazone, which can lead to a brisk diuretic response and cause life-threatening hyponatremia and hypokalemia even after relatively short-term use.
The relationship between diuretics and serum creatinine is complex. Patients with volume overload may have hemodilution, whereby serum creatinine will appear low and underestimate the extent of chronic kidney disease present. In such a situation, diuresis may be accompanied by increases in creatinine with the perception that the diuretic has worsened renal function. In older adults with heart failure, contributors to chronic
kidney disease include other cardiovascular risk factors like hypertension and diabetes, as well as the chronic insult of heart failure itself, which can adversely affect kidney perfusion through impaired cardiac output and/or chronically elevated filling pressures. Thus, by achieving and maintaining euvolemia with subsequent optimization of cardiac output, diuretics can theoretically mitigate heart failure-related kidney damage. Avoiding diuretic titration and tolerating persistent congestion comes at the expense of symptoms, worse quality of life, and/or increased risk for hospitalization. Accordingly, it may be reasonable to engage in shared decision-making regarding therapeutic options and inform patients of a possible increase in creatinine resulting from the resolution of hemodilution, rather than from diuretic-related injury. It is similarly important to counsel patients about symptoms of hypovolemia such as dizziness and lightheadedness as indicators of overdiuresis, which can subsequently worsen cardiac output and cause kidney injury.
Digoxin Digoxin inhibits the sodium-potassium exchange pump located within the myocyte membrane, producing a rise in intracellular sodium concentration. This facilitates sodium-calcium exchange, leading to an increase in intracellular calcium. Calcium binds with troponin C, which initiates the process of contraction by allowing myosin to bind with actin. By increasing calcium availability, digoxin induces a modest increase in the force of myocardial contraction (positive inotropic effect). This effect occurs whether or not heart failure is present, and it does not appear to be affected by age.
The Digitalis Investigation Group (DIG) reported the results of a prospective randomized trial involving 6800 patients with HFREF. Patients were randomized to receive digoxin or placebo in addition to diuretics and an ACE inhibitor, and the average duration of follow-up was 37 months.
Overall mortality did not differ between digoxin and placebo (34.8% vs 35.1%), but there were 28% fewer hospitalizations for heart failure in the digoxin group, and the combined end point of death or hospitalization for heart failure was significantly reduced. In addition, the beneficial effects of digoxin were similar in younger and older patients, including octogenarians. Subsequent analyses based on data from the DIG trial suggest that digoxin administered at low dosages to achieve serum concentrations in the range of
0.5 to 0.9 ng/mL may be associated with improved survival as well as a reduction in all-cause hospitalizations. These findings confirm that digoxin is
beneficial in controlling heart failure symptoms and support the use of low-dose digoxin in patients who remain symptomatic despite appropriate dosages of an ACE inhibitor, β-blocker, MRA, and diuretic.
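The 0.5 to 0.9 ng/mL range cited above can be framed as a simple level check. The range comes from the DIG trial analyses described in the text; the function and its messages are illustrative only, and older adults may develop toxicity even at levels within this range, so clinical assessment always takes precedence.

```python
def digoxin_level_status(level_ng_ml, low=0.5, high=0.9):
    """Classify a serum digoxin concentration against the 0.5-0.9 ng/mL
    range associated with improved outcomes in DIG trial analyses.
    Illustrative sketch only; toxicity can occur within this range.
    """
    if level_ng_ml < low:
        return "below target range"
    if level_ng_ml > high:
        return "above target range - assess for toxicity"
    return "within 0.5-0.9 ng/mL target range"

# Example: a level of 1.4 ng/mL drawn 2-4 weeks after initiation
print(digoxin_level_status(1.4))  # above target range - assess for toxicity
```

Given digoxin's narrow therapeutic index, a numeric check like this is at best a reminder to look further; signs such as arrhythmias, heart block, nausea, or confusion matter more than the number itself.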
Side effects from digoxin include cardiac, neurologic, and gastrointestinal effects. In the DIG study, side effects that occurred more frequently in patients receiving digoxin included nausea and vomiting, diarrhea, visual disturbances, supraventricular and ventricular arrhythmias, and advanced atrioventricular heart block. Although not reported in the DIG trial, older patients may be at increased risk of digoxin toxicity, especially cardiac toxicity, in part owing to a decreased volume of drug distribution.
Patients with chronic lung disease, amyloid heart disease, and other conditions may also be at increased risk of digoxin toxicity.
In most older patients with relatively normal renal function, a digoxin dose of 0.125 mg daily is usually sufficient to achieve a therapeutic effect. Patients with renal impairment or small body habitus may require a lower dose. Serum digoxin concentration should be measured 2 to 4 weeks after initiating therapy, and periodically thereafter, to ensure that the levels are not supratherapeutic, which can be toxic given digoxin’s narrow therapeutic index. It is worth noting that older adults can develop toxicity even at “therapeutic” levels of digoxin. It is therefore important to remain vigilant about signs and symptoms that may indicate digoxin toxicity such as worsening arrhythmias and/or heart block, gastrointestinal symptoms like nausea/vomiting, and/or confusion. Since diuretic-induced hypokalemia and hypomagnesemia potentiate digoxin’s cardiotoxic effects, including proarrhythmia, it is important to maintain normal serum concentrations of these electrolytes in all patients receiving digoxin. All of these factors must be considered when weighing the risks and benefits of digoxin, especially among older adults with low body weight and fluctuating renal function.
Ivabradine Ivabradine is a newer therapeutic agent that selectively inhibits the If current in the sinoatrial node, resulting in heart rate reduction. The SHIFT study, a double-blind randomized trial, demonstrated a reduction in the composite endpoint of cardiovascular death or HF hospitalization with ivabradine compared to placebo. The benefit of ivabradine was driven by a reduction in HF hospitalization and not by CV death. All subjects enrolled had a left ventricular ejection fraction (LVEF) less than or equal to 35% and were in sinus rhythm with a resting heart rate of 70 bpm or more. Heart rate slowing is the presumed mechanism of ivabradine’s benefit, yet only 25% of patients studied were on optimal doses of β-blocker therapy. Given the well-proven mortality benefits of β-blockers, it is important to initiate and uptitrate these agents to target doses, as tolerated, before assessing the resting heart rate for consideration of ivabradine.
Calcium channel blockers First-generation calcium channel blockers, including nifedipine, diltiazem, and verapamil, are contraindicated in patients with HFREF because each of these agents has been associated with adverse clinical outcomes. The third-generation calcium channel blockers amlodipine and felodipine have been studied in prospective randomized trials involving patients with HFREF. Although the Prospective Randomized Amlodipine Survival Evaluation (PRAISE) suggested that amlodipine might be beneficial in patients with nonischemic HFREF, this was not confirmed in PRAISE-2. Similarly, the V-HeFT-3 trial failed to demonstrate a significant benefit in patients with HFREF treated with felodipine. Thus, there are no approved indications for the use of calcium channel blockers in patients with HFREF, and their use in this condition is not recommended. However, in patients with heart failure and active anginal symptoms not controlled with β-blockers and nitrates, the addition of a long-acting calcium channel blocker is reasonable. Similarly, diltiazem or verapamil may be used in heart failure patients with rapid atrial fibrillation who do not respond adequately to β-blockers and other interventions.
Antithrombotic therapy Patients with left ventricular systolic dysfunction are at increased risk for thromboembolic events, including stroke. However, in the absence of atrial fibrillation, rheumatic mitral valve disease, or a history of prior embolization, the value of antithrombotic treatment for the prevention of embolic events is unproven. In the Warfarin and Antiplatelet Therapy in Chronic Heart Failure (WATCH) trial, 1587 patients with NYHA class II or III HFREF were randomized to receive aspirin 162 mg/day, clopidogrel 75 mg/day, or warfarin to maintain an international normalized ratio (INR) of
2.5 to 3.0. After a mean follow-up of 23 months, there were no differences between the three groups in the primary composite end point of death, myocardial infarction, or stroke. Hospitalizations for heart failure occurred more frequently in the aspirin group than with either clopidogrel or warfarin, whereas bleeding complications were more common with warfarin. The mean age of patients in the WATCH trial was 63; subgroup analysis by age has not been reported.
In the Warfarin versus Aspirin in Reduced Cardiac Ejection Fraction (WARCEF) study, 2305 patients with heart failure, a left ventricular ejection fraction less than or equal to 35%, and sinus rhythm were randomized to warfarin (target INR 2.0–3.5) or to aspirin 325 mg daily. The primary outcome was all-cause mortality, ischemic stroke, or intracerebral hemorrhage. The mean age was 61 and 80% of participants were men. After a mean follow-up of 3.5 years, there was no difference between groups in the primary outcome. Warfarin was associated with fewer ischemic strokes but more major bleeding events. Intracranial hemorrhage was infrequent and did not differ between groups. A subgroup analysis showed patients older than 60 years of age did not benefit from warfarin over aspirin on the primary outcome; and when major hemorrhage was included as part of a composite outcome, the adverse event rate was significantly higher for warfarin.
Based on currently available data, anticoagulation with warfarin to achieve an INR of 2 to 3 is recommended in heart failure patients with chronic or paroxysmal atrial fibrillation or atrial flutter, rheumatic mitral valve disease with left atrial enlargement, prior stroke or unexplained arterial embolus, a mobile left ventricular thrombus (as demonstrated by echocardiography or other imaging modality), or a left atrial appendage thrombus identified by transesophageal echocardiography. Routine use of warfarin in other circumstances is not recommended. In patients with nonvalvular atrial fibrillation, one of the newer oral anticoagulants (dabigatran, rivaroxaban, apixaban, edoxaban) may be used as an alternative to warfarin. Careful attention should be paid to recommended dosing adjustments for these drugs in the setting of renal insufficiency and/or advanced age to optimize benefits and risks. (See Chapters 22, 75, and 96 for more details.)
Aspirin is justified in patients with known coronary heart disease, particularly those with recent myocardial infarction, unstable angina, percutaneous coronary intervention, or bypass surgery. Aspirin is also recommended for older patients with peripheral arterial disease or diabetes. In addition, aspirin is appropriate in high-risk patients with atrial arrhythmias who are not suitable candidates for warfarin. As noted previously, additional study is needed to determine the value of aspirin in older patients with heart failure without established vascular disease or diabetes.
Device Therapy
Device therapy, including implantable cardioverter-defibrillators (ICDs), cardiac resynchronization therapy (CRT), and the MitraClip, is playing an increasing role in the management of patients with HFREF. ICDs reduce mortality from sudden cardiac death in patients with NYHA class II to III HFREF and a left ventricular ejection fraction less than or equal to 35% (primary prevention), and in patients resuscitated from cardiac arrest attributable to ventricular tachyarrhythmias (secondary prevention).
However, although current HF guidelines do not incorporate age into the recommendations for ICD therapy, very few patients greater than or equal to 75 years were enrolled in clinical trials evaluating these devices. In addition, a comprehensive meta-analysis suggested that the benefit of ICDs declines with age, most likely due to competing risks for mortality. Patients with life expectancies of less than 12 to 18 months are unlikely to benefit from an ICD, and patients greater than or equal to 80 years are twice as likely as younger patients to experience major complications related to device implantation. Thus, the benefit-to-risk relationship is modified by age, and consideration of ICD therapy must be individualized based on life expectancy, prevalent comorbidities, and patient goals of care using a process of shared decision-making. In patients who choose to undergo placement of an ICD, management of the ICD at end of life, including circumstances under which the patient would want to have the defibrillator portion of the device disabled in order to avoid painful shocks, should be clearly articulated prior to implantation and at routine clinic visits after implantation. Similarly, if a generator change is needed due to battery depletion, the option of foregoing the procedure, along with the implications of this decision, should be discussed.
In contrast to ICDs, which reduce the risk of sudden death but do not improve quality of life, CRT improves symptoms, exercise tolerance, quality of life, and survival in carefully selected patients with HFREF, including octogenarians. CRT involves placement of a biventricular pacemaker with one lead in the right ventricle and a second lead inserted into the coronary sinus in a retrograde fashion to pace the left ventricle. As the name implies, the goal of CRT is to “resynchronize” myocardial contraction, thereby increasing stroke work, ejection fraction, and cardiac output. CRT is indicated in patients with NYHA class II to IV HFREF, left ventricular ejection fraction less than or equal to 35%, and QRS duration greater than or
equal to 150 milliseconds by electrocardiogram. Patients with left bundle branch block, which is present in 20% to 30% of patients with HFREF, derive the greatest benefit from CRT, and there is evidence that the benefits tend to be greater in women than in men. CRT can be performed with or without concomitant ICD therapy (CRT-D and CRT-P, respectively), and patients greater than or equal to 80 years are proportionately more likely than younger patients to receive a CRT-P device. As with ICDs, selection of patients for CRT should involve shared decision-making with appropriate consideration of the potential salutary effects on quality of life in older patients who are significantly limited by persistent heart failure symptoms despite optimal medical therapy.
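The numeric selection criteria above (NYHA class II to IV, ejection fraction of 35% or less, QRS of 150 milliseconds or more) can be sketched as a simple screen. This is illustrative only; the function and parameter names are invented here, and real-world selection also weighs QRS morphology (especially left bundle branch block), comorbidities, life expectancy, and patient goals through shared decision-making:

```python
def meets_crt_criteria(nyha_class: int,
                       lvef_percent: float,
                       qrs_ms: float) -> bool:
    """Illustrative check of the three numeric CRT criteria in the text.

    Not a clinical tool: real candidacy assessment is far broader.
    """
    return (nyha_class in (2, 3, 4)       # NYHA class II to IV
            and lvef_percent <= 35        # LVEF <= 35%
            and qrs_ms >= 150)            # QRS >= 150 ms

# Example: NYHA III, LVEF 28%, QRS 160 ms -> numeric criteria met
print(meets_crt_criteria(3, 28, 160))  # True
print(meets_crt_criteria(2, 45, 160))  # False (LVEF too high)
```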
The MitraClip is a transcatheter-based technology which grasps both the anterior and posterior mitral valve leaflets, thereby reducing mitral regurgitation (MR) by increasing the coaptation between the regurgitant valve leaflets. In the COAPT trial among patients with HFREF (n = 614) and moderate-to-severe or severe secondary mitral regurgitation who remained symptomatic despite the use of maximal doses of guideline-directed medical therapy, transcatheter mitral-valve repair resulted in a lower rate of hospitalization for heart failure and lower all-cause mortality than medical therapy alone. Many patients with HFREF have load-dependent mitral regurgitation that is no longer significant after optimization of medical therapy (diuretics and afterload reduction). Accordingly, such an intervention should only be performed after careful evaluation by a multidisciplinary heart team including a geriatrician.
Treatment of Heart Failure With Preserved Ejection Fraction
Even though more than 50% of older patients with heart failure have preserved left ventricular systolic function (ie, HFPEF), large-scale clinical trials have yet to clearly demonstrate major beneficial effects for any pharmacologic agents except for SGLT2 inhibitors (Table 76-11). As a result, therapy for HFPEF remains largely empiric, except for those with transthyretin cardiac amyloidosis as the cause.
TABLE 76-11 ■ TRIALS FOR HEART FAILURE WITH PRESERVED EJECTION FRACTION
[Table 76-11 could not be recovered intact from the source extraction. It tabulates randomized trials of pharmacotherapy for heart failure with preserved ejection fraction, listing for each trial the agent studied, sample size, age (mean ± SD or median and range), and principal findings. Trials legible in the extraction residue include PEP-CHF (perindopril), CHARM-Preserved (candesartan), I-PRESERVE (irbesartan), SENIORS (nebivolol), TOPCAT (spironolactone), Aldo-DHF (spironolactone), RELAX (sildenafil), ESS-DHF (sitaxsentan), DIG Ancillary (digoxin), SWEDIC (carvedilol), RAAM-PEF (eplerenone), ELANDD (nebivolol), INDIE-HFpEF (inorganic nitrite), CAPACITY (praliciguat), PARAGON-HF (sacubitril/valsartan), SOCRATES-PRESERVED (vericiguat), EDIFY (ivabradine), EMPEROR-Preserved (empagliflozin), and PRESERVED-HF (dapagliflozin).]
At least 70% to 80% of older persons with HFPEF have hypertension, and coronary and valvular heart diseases are also highly prevalent in this population. Treatment for HFPEF begins with aggressive management of hypertension to target levels. Although there are limited data on the ideal blood pressure targets for patients with HFPEF, it may be reasonable to apply the recent observations from SPRINT and recommend a systolic blood pressure less than 130 mm Hg and a diastolic blood pressure less than 90 mm Hg for most ambulatory, community-dwelling older adults. This target should be personalized at the individual level, however, accounting for the potential for an increased risk of falls in older adults with HFPEF, a subpopulation with a high prevalence of frailty who frequently take diuretics, which can increase the risk of orthostatic hypotension. Myocardial ischemia should be treated with antianginal medications and/or coronary revascularization as indicated. Resting and exercise heart rates should be adequately controlled in patients with atrial fibrillation. Patients with severe valvular heart disease should be considered for valve repair or replacement, and less severe regurgitant valvular lesions should be treated with vasodilators, such as ACE inhibitors. As with HFREF, nonpharmacologic aspects of therapy, including regular physical activity and exercise as described earlier, should be
appropriately addressed. This, perhaps, is paramount in HFPEF given the paucity of data to date demonstrating beneficial effects from most of the pharmacologic approaches outlined below.
Diuretics Diuretics are an essential component of therapy for the relief of pulmonary and systemic venous congestion in most patients with HFPEF. However, such patients are often “volume sensitive.” As a result, overly zealous diuresis can lead to a reduction in left ventricular diastolic volume, with a resultant decline in stroke volume and cardiac output, often manifested by increased fatigue, relative hypotension, and worsening prerenal azotemia. Thus, diuretics must be titrated judiciously to relieve congestion while avoiding overdiuresis.
β-Blockers β-Blockers have little or no direct effect on diastolic function, but theoretically could improve symptoms in HFPEF by slowing heart rate and lengthening the diastolic filling period. However, chronotropic incompetence, or the inability to sufficiently increase the heart rate during exercise, is common in HFPEF and may be exacerbated by β-blockers.
Effective blood pressure control may aid in the regression of left ventricular hypertrophy if present, but other antihypertensives may be more effective in this regard. In clinical trials to date that examined the effects of β-blockers in individuals with a left ventricular ejection fraction of at least 50%, the benefits of β-blockers were not observed, and the data in fact suggest an increase in all-cause mortality, although this was not statistically significant. An observational study of patients with HFPEF from the TOPCAT study also suggested harm from β-blockers, with increased rates of heart failure hospitalization observed in patients taking a β-blocker at baseline. On the other hand, patients with HFPEF frequently contend with coronary artery disease and atrial fibrillation, for which β-blockers have previously demonstrated benefit. Whether to continue or initiate a β-blocker is therefore challenging, and deprescribing β-blockers in this setting is also not well studied.
Accordingly, until more data on this topic become available, decisions should be individualized based on the presence of other cardiovascular conditions where β-blockers are indicated, and consideration of the baseline heart rate, preexisting conduction disease, and possible side effects related to β-blocker use.
ACE inhibitors ACE inhibitors may improve symptoms in HFPEF both directly (by improving diastolic function) and indirectly (by promoting regression of
left ventricular hypertrophy). The use of ACE inhibitors for the treatment of HFPEF in patients of advanced age is supported by findings from the Perindopril in Elderly People with Chronic Heart Failure (PEP-CHF) study, in which 850 patients greater than or equal to 70 years (mean age 76, 55% women) with heart failure and an estimated ejection fraction greater than or equal to 40% were randomized to perindopril 4 mg once daily or placebo and followed for an average of 2.1 years. Overall, there was no significant difference between groups with respect to the primary outcome of death or unplanned hospitalization for heart failure. However, heart failure hospitalizations were significantly reduced by 37% during the first 12 months of follow-up in patients randomized to perindopril. Relative to placebo, perindopril-treated patients also experienced significant improvements in NYHA class and exercise tolerance during the first year of therapy. Perindopril is not approved for the treatment of heart failure in the United States, and none of the other ACE inhibitors are approved for the treatment of HFPEF; however, given that the benefits of perindopril may be a class effect, the use of an ACE inhibitor may be reasonable in HFPEF, especially when blood pressure is elevated and/or patients have other indications for an ACE inhibitor, such as diabetes.
Angiotensin II receptor blockers ARBs lower blood pressure and may have salutary effects on diastolic function like those observed with ACE inhibitors. In the CHARM-Preserved Trial, 3024 patients with NYHA class II to IV heart failure and an ejection fraction greater than 40% were randomized to candesartan or placebo and followed for a median of 37 months. The mean age was 67, 27% were 75 years or older, and 40% were women. Mortality did not differ between groups, but patients randomized to candesartan experienced a significant 16% reduction in the risk of hospitalization for heart failure and 29% fewer total heart failure admissions. Subgroup analysis by age was not reported. In large part due to this study, ARBs are now considered reasonable for use to prevent hospitalizations for HFPEF as per the AHA/ACC heart failure guidelines (last updated 2017).
MRAs The MRAs spironolactone and eplerenone reduce myocardial hypertrophy and fibrosis in laboratory animals, and small studies indicate that they have a favorable effect on left ventricular diastolic function in humans. In addition, as discussed previously, both agents have been shown to improve mortality and other outcomes in patients with HFREF. In the TOPCAT trial, 3445 patients with symptomatic heart failure and an ejection fraction greater than or equal to 45% were randomized to spironolactone or placebo and
followed for a mean of 3.3 years. The average age was 69 and 52% were women. The primary outcome, a composite of cardiovascular death, aborted cardiac arrest, or hospitalization for heart failure, did not differ between patients randomized to spironolactone versus placebo. Similarly, total mortality and all-cause hospitalizations were not different between groups. However, hospitalizations for heart failure were reduced 17% among patients randomized to spironolactone (p = 0.04). Given some concerns about the study population and study conduct in Europe, post-hoc analyses have been conducted on patients from just North and South America. These data show that spironolactone was associated with a significant 18% reduction in the primary end point, as well as a 26% reduction in cardiovascular mortality and 18% reduction in heart failure rehospitalization. Thus, although additional research is needed, this study suggests that spironolactone may be beneficial for patients with HFPEF. In fact, the FDA is considering approval of MRA therapy for HFPEF based on these data.
Calcium channel blockers Calcium channel blockers decrease intracellular calcium and may have a modest beneficial effect on diastolic function. However, there have been no large clinical trials evaluating calcium channel blockers for the treatment of HFPEF. While calcium channel antagonists are not specifically indicated for the treatment of this condition, they may be helpful for treating other concurrent conditions common in adults with HFPEF, such as atrial fibrillation. Caution should be exercised, however, given their potential to worsen cardiac output by impairing chronotropy and inotropy.
Nitrates In addition to relieving ischemia, nitrates are effective venodilators and thus lower pulmonary capillary wedge pressure. For these reasons, nitrates may serve as a useful adjunct to diuretics in relieving symptoms of pulmonary congestion, particularly orthopnea. However, nitrates also have the potential for decreasing venous return to the heart, thereby reducing left ventricular diastolic volume and stroke volume. In addition, tolerance to the hemodynamic effects of nitrates occurs in many patients. A randomized crossover trial of isosorbide mononitrate in subjects with HFPEF did not demonstrate better quality of life or submaximal exercise capacity and demonstrated that nitrates might lead to reduced activity among adults with HFPEF compared to placebo. As a result, use of nitrates for the routine management of HFPEF is not recommended.
Digoxin Digoxin, as well as other inotropic agents, may exert a favorable effect on diastolic function by accelerating calcium reuptake by the sarcoplasmic reticulum at the onset of diastole. In the original DIG trial, 988 patients with heart failure and an ejection fraction of more than 45% were randomized to digoxin or placebo in an ancillary study. As in the main trial, digoxin had no effect on mortality. Hospitalizations for heart failure were reduced in patients with HFPEF receiving digoxin, but this effect was counterbalanced by increased hospitalizations for acute coronary syndromes. Thus, digoxin does not appear to be beneficial in patients with HFPEF and is not recommended except as an adjunct for controlling heart rates in patients with atrial fibrillation.
ARNI While sacubitril/valsartan has demonstrated dramatic benefits in HFREF, its efficacy in HFPEF is less dramatic. In the Prospective Comparison of ARNI with ARB on Management of Heart Failure with Preserved Ejection Fraction (PARAMOUNT) trial of 301 HFPEF patients, ARNI therapy resulted in lower NT-proBNP levels after 12 weeks than valsartan alone. However, in the phase III PARAGON-HF trial among 4822 patients with NYHA class II to IV heart failure, an ejection fraction of 45% or higher, elevated levels of natriuretic peptides, and structural heart disease, sacubitril/valsartan did not result in a significantly lower rate of total hospitalizations for heart failure and death from cardiovascular causes.
Notably in this large trial, there was heterogeneity of treatment effects with possible benefit with sacubitril-valsartan in patients with lower ejection fraction and in women. The FDA has recently approved sacubitril/valsartan therapy for patients with chronic heart failure regardless of ejection fraction, while noting that the drug is most effective in patients with a reduced ejection fraction.
Other agents For patients diagnosed with transthyretin cardiac amyloidosis, tafamidis, a TTR stabilizer, was approved by the FDA and EMA based on the ATTR-ACT trial, which showed lower all-cause mortality versus placebo (29.5% vs 42.9%) and a 32% lower risk of cardiovascular hospitalizations in those treated with tafamidis compared to placebo. Notably, subjects with NYHA class III symptoms in ATTR-ACT had higher rates of cardiovascular-related hospitalization with tafamidis therapy compared to placebo, emphasizing the importance of early diagnosis and treatment. In the ATTR-ACT trial, decline in the distance covered on the 6-minute walk test and in the Kansas City Cardiomyopathy Questionnaire overall summary score was
slowed with tafamidis therapy. Tafamidis has two formulations, tafamidis meglumine (20 mg capsules, dose 80 mg daily) and tafamidis free acid (61 mg capsule daily), the latter of which was formulated for patient convenience as a single-dose capsule. These formulations are bioequivalent, though they are not substitutable on a per-milligram basis. The high cost (list price of $225,000 per year) could limit access.
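As a rough illustration of the ATTR-ACT mortality figures quoted above (29.5% with tafamidis vs 42.9% with placebo), the absolute risk reduction and number needed to treat can be computed directly. This back-of-envelope arithmetic ignores the trial's time-to-event analysis and is for orientation only:

```python
# Mortality proportions quoted in the text (ATTR-ACT)
placebo_mortality = 0.429
tafamidis_mortality = 0.295

# Absolute risk reduction and number needed to treat
absolute_risk_reduction = placebo_mortality - tafamidis_mortality  # 0.134
number_needed_to_treat = 1 / absolute_risk_reduction               # ~7.5

print(f"ARR = {absolute_risk_reduction:.1%}, NNT ≈ {number_needed_to_treat:.1f}")
```

In other words, on these crude proportions, roughly 7 to 8 patients would need to be treated to avert one death over the trial period.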
Preliminary studies of phosphodiesterase 5 inhibitors suggested that these agents may have favorable effects on exercise capacity in patients with HFPEF. However, in the RELAX trial, which randomized 216 patients (mean age 69, 48% women) with heart failure and a left ventricular ejection fraction greater than or equal to 50% to sildenafil or placebo for 24 weeks, sildenafil did not result in significant improvements in exercise capacity or clinical status compared to placebo.
Endothelin type A receptor antagonists have also shown promise for treating HFPEF in preliminary studies. In the Effectiveness of Sitaxsentan Sodium in Patients with Diastolic Heart Failure (ESS-DHF) trial, 192 patients (mean age 65, 63% women) with HFPEF and a left ventricular ejection fraction greater than or equal to 50% were randomly assigned in a 2:1 ratio to receive sitaxsentan or placebo for 24 weeks. The primary outcome was change in treadmill exercise time; secondary outcomes included changes in left ventricular mass, diastolic function, symptom severity, and quality of life. Sitaxsentan therapy showed modest improvement in treadmill exercise time relative to placebo (37 seconds; p = 0.03), but there was no effect on any of the secondary outcomes. A study examining macitentan in adults with HFPEF and concurrent pulmonary hypertension was recently completed, with results anticipated in 2021.
SGLT2 inhibitors have been investigated in two large trials of patients with HFPEF, DELIVER (evaluating dapagliflozin) and EMPEROR-Preserved (evaluating empagliflozin). EMPEROR-Preserved assigned 5988 patients with class II–IV heart failure and an ejection fraction of more than 40% to receive empagliflozin (10 mg once daily) or placebo. Over a median of 26.2 months, the primary endpoint of cardiovascular death or hospitalization for heart failure was lower in the empagliflozin arm (hazard ratio, 0.79; 95% confidence interval [CI], 0.69 to 0.90; P < 0.001), mainly related to a lower risk of hospitalization for heart failure. The effects of empagliflozin appeared consistent in patients with or without diabetes.
Summary Studies published to date indicate that ACE inhibitors, ARBs, MRAs, and more recently ARNI may have favorable effects on some outcomes in patients with HFPEF, with only ARNI and SGLT2 inhibitor therapy approved by the FDA for subjects with an EF greater than 50%. In subgroup analyses of randomized trials, ARNI therapy appears to have a more favorable effect in patients with an EF of 40% to 50%, but this group may be more similar in pathophysiology and treatment responsiveness to HFREF. SGLT2 inhibitors can be considered to reduce the risk of hospitalization for heart failure. Consequently, management of HFPEF should include aggressive treatment of the underlying cardiac disease, and a diuretic should be administered at low-to-moderate doses to relieve congestion and edema. The addition of an ACE inhibitor, ARB, MRA, ARNI, SGLT2 inhibitor, or β-blocker to improve symptoms and reduce the risk of hospitalization may be reasonable in some cases, but this requires individualization. Given the lack of consistent data on benefits, alternative treatment should be considered if and when these agents are not tolerated or lead to worse outcomes. Additional studies to evaluate novel therapies are ongoing and represent an important opportunity for older adults to participate and to ensure that they are well represented in these trials.
Isolated Right Heart Failure
While the most common cause of chronic right-sided heart failure is one or more abnormalities of left heart function, a small proportion of patients present with isolated right heart failure. Etiologies of isolated right heart failure in older adults include pulmonary arterial hypertension due to chronic lung disease, chronic pulmonary thromboembolic disease, sleep-disordered breathing, primary pulmonary vascular disease, and disorders of the tricuspid or (less commonly) pulmonic valve (eg, infectious endocarditis, carcinoid heart disease). Rarely, right heart failure in older patients may be attributed to congenital heart disease (eg, atrial septal defect), neoplasm (eg, right atrial myxoma or rhabdomyosarcoma), or a primary cardiomyopathy involving the right ventricle (eg, arrhythmogenic right ventricular dysplasia). Acute right heart failure may be due to right ventricular infarction, massive or sub-massive pulmonary embolism, or severe lung disease (eg, pneumonia, acute respiratory distress syndrome). Symptoms of right heart failure include dyspnea, impaired exercise tolerance, dependent edema, and, in severe cases, abdominal discomfort and swelling. The physical examination is
notable for signs of elevated right-sided pressures (jugular venous distension, abdominojugular reflux, right ventricular heave, hepatomegaly), lower extremity edema, and possibly ascites. Depending on the etiology, other symptoms and signs may be present. Treatment is directed primarily at the underlying cause(s) and secondarily at alleviating systemic congestion through the judicious use of diuretics. The value of other pharmacologic agents, such as β-blockers and renin-angiotensin system inhibitors, for the treatment of isolated right heart failure is unknown.
Advanced Heart Failure
Refractory or advanced heart failure may be defined as heart failure not amenable to primary corrective measures (eg, valve replacement or revascularization) and not responsive to aggressive nonpharmacologic and pharmacologic therapy as described earlier. However, before designating heart failure as refractory, it is important to perform a careful search for potentially treatable causes, to carefully review the patient’s medication regimen to ensure that therapy is optimal, and to discuss the patient’s diet and medication habits in detail with the patient and family to ensure that an appropriate level of adherence is being maintained. The latter issue is of particular importance, since many cases of refractory heart failure can be traced to nonadherence to dietary restrictions, medications, or both.
In most cases, refractory or advanced heart failure simply represents the final common pathway of end-stage heart disease. Under these circumstances, the value of highly aggressive treatment is questionable, and decisions regarding the appropriateness of specific therapeutic interventions must be made on an individualized basis (see also Chapters 7 and 67).
In patients with persistent pulmonary congestion or peripheral edema, high-dose oral diuretics (eg, furosemide 200 mg BID or bumetanide 10 mg daily), alone or in combination with metolazone, may be effective.
Alternatively, a continuous intravenous infusion of furosemide 5 to 40 mg/h or bumetanide 0.5 to 1 mg/h may facilitate diuresis.
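For orientation, the continuous infusion rates quoted above translate into substantial 24-hour totals. The helper below is a hypothetical sketch of that arithmetic, not a dosing tool:

```python
def daily_dose_mg(rate_mg_per_hour: float) -> float:
    """Total drug delivered over 24 hours at a constant infusion rate."""
    return rate_mg_per_hour * 24

# Furosemide at the ends of the quoted 5-40 mg/h range:
print(daily_dose_mg(5))   # 120.0 mg/day
print(daily_dose_mg(40))  # 960.0 mg/day
```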
The use of intravenous inotropic agents in the management of chronic heart failure is somewhat controversial since these agents have not been shown to improve outcomes and they may increase the risk of life-threatening arrhythmias. Nonetheless, extensive clinical experience indicates that continuous infusions of dobutamine or milrinone may reduce symptoms and improve quality of life in selected patients with refractory heart failure. The
use of intravenous inotropic agents at home can also be used for select patients for the purposes of bridging them to advanced therapies or for the purposes of palliation.
As noted earlier, CRT has been shown to improve symptoms, quality of life, and survival in patients with advanced heart failure and left bundle branch block or marked intraventricular conduction delay on the 12-lead electrocardiogram. This procedure should, therefore, be considered in appropriately selected patients with persistent class III or IV heart failure symptoms.
An emerging therapy for patients with end-stage refractory heart failure (primarily HFREF) is mechanical circulatory support through implantation of a left ventricular assist device (LVAD). LVADs improve symptoms, exercise tolerance, quality of life, and survival in selected patients with severe heart failure, including patients in their 70s and early 80s. Although LVADs were originally developed as a bridge to heart transplantation, with technological advances they are now commonly implanted as “destination therapy” in patients who are not transplant candidates. As a result, an increasing number of older adults are receiving LVADs, and this trend is likely to continue as the technology evolves. Older adults are at increased risk for gastrointestinal bleeding following LVAD implantation; other potential complications include infection, stroke, and pump thrombosis. Optimal patient selection is critical, and patients with advanced comorbidities or frailty may not be suitable candidates. To this end, a thorough discussion of goals of care, facilitated by a palliative care team consultation (which CMS guidelines require to be a formal part of the LVAD team), is recommended as an integral component of the evaluation for LVAD therapy.
Heart transplantation is a highly effective therapy for patients with advanced heart failure, but its use is limited by the paucity of donor hearts. In part due to limited organ availability coupled with issues of immunosuppression in older adults and the increased risk of infection, most transplant centers exclude patients older than 70 to 75 years. Nonetheless, among carefully selected patients greater than or equal to 65 years undergoing heart transplantation, outcomes are favorable and generally similar to those in younger patients.
PROGNOSIS
The long-term prognosis in patients with established heart failure is poor, and the 5-year survival rate among older adults is less than 50%. In patients greater than or equal to 80 years old hospitalized with heart failure, fewer than 25% survive more than 5 years. In general, the prognosis is worse in men than in women and in patients with an ischemic rather than nonischemic etiology. Patients with more severe symptoms or exercise intolerance, as defined by the NYHA functional class or as assessed by a 6-minute walk test, also have a less favorable outlook. Other markers of an adverse prognosis include elevated BNP; low systolic blood pressure; hyponatremia; renal insufficiency; anemia; peripheral arterial disease; cognitive dysfunction; and the presence of atrial fibrillation or high-grade ventricular arrhythmias. In patients with chronic heart failure, 40% to 50% die from progressive heart failure, 40% die from arrhythmias, and 10% to 20% die from other causes (eg, myocardial infarction or noncardiac conditions).
Notably, the proportion dying from noncardiac causes rises with advancing age owing to other comorbid conditions and the concurrence of geriatric conditions such as cognitive impairment and frailty. As a means to embed prognosis into medical decision-making, it may be reasonable to apply the domain management approach to caring for older adults with heart failure, where multiple domains of health across medical (multimorbidity, polypharmacy, malnutrition), mind/emotion (depression, anxiety, cognitive impairment), functional (frailty, impaired mobility, functional impairment, history of falls), and social environment (social support, financial means) are considered in the care of older adults.
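The domain management approach described above can be sketched as a simple data structure. The domain names follow the text; the container and helper function are hypothetical illustrations, not a validated assessment instrument:

```python
# The four domains and example concerns listed in the text
patient_domains = {
    "medical": ["multimorbidity", "polypharmacy", "malnutrition"],
    "mind_emotion": ["depression", "anxiety", "cognitive impairment"],
    "functional": ["frailty", "impaired mobility", "functional impairment",
                   "history of falls"],
    "social_environment": ["social support", "financial means"],
}

def flagged_domains(findings: set) -> list:
    """Return the domains in which any listed concern was identified."""
    return [domain for domain, concerns in patient_domains.items()
            if any(c in findings for c in concerns)]

# Example: an assessment identifying frailty and polypharmacy flags two domains
print(flagged_domains({"frailty", "polypharmacy"}))  # ['medical', 'functional']
```

Organizing the assessment this way makes it explicit when a care plan has addressed every domain rather than only the medical one.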
ADVANCE CARE PLANNING AND END-OF-LIFE DECISIONS
Overall survival rates for patients with heart failure are lower than for most forms of cancer. In addition, once heart failure symptoms have reached an advanced stage (eg, NYHA class III or IV), quality of life is often severely compromised and therapeutic options are limited. Moreover, even patients with relatively mild or well-compensated heart failure are continually at risk of experiencing sudden cardiac arrest, and, if initial resuscitative efforts are successful, questions regarding life support and related issues may arise.
For these reasons, it is incumbent upon the physician to discuss the patient’s wishes regarding the intensity of treatment and end-of-life care at a
time when the patient is still capable of understanding the issues and making informed choices. In addition, since the patient’s views may evolve over the course of illness, these issues should be readdressed at periodic intervals.
The development of an advance directive and appointment of durable power of attorney should also be encouraged (see Chapters 7 and 26).
A related concern is the extent to which clinicians should offer aggressive or investigational therapeutic options that are unlikely to substantially alter the natural history of disease or significantly improve quality of life. This concern applies not only to many of the treatment modalities discussed in the Advanced Heart Failure section earlier, but also to such procedures as admission to an intensive care unit and endotracheal intubation. In many cases, these interventions not only fail to modify the clinical course but contribute to the patient’s pain and suffering in the terminal stages of disease. Moreover, the suggestion that a given intervention may help stabilize the patient and slow disease progression may create false hopes in the minds of the patient and family, and subsequent failure of the intervention may compound the emotional suffering that both the patient and the family are forced to endure. For these reasons, it is essential that the clinician realistically appraise the potential benefits and attendant risks, both physical and emotional, prior to offering aggressive therapeutic options that may provide little or no hope of improving the patient’s quality of life over a clinically important period of time. In this context, it is often appropriate to offer transition to a palliative care approach and to obtain consultation from a palliative care specialist.
Finally, as the patient approaches the terminal stages of disease, there should be discussions with the patient and family regarding where the patient would like to spend his or her final days. For many patients, the idea of dying at home surrounded by close family is comforting, and this desire should be honored whenever possible. Often home hospice affords optimal end-of-life care in the home environment by providing effective symptom control, as well as emotional, spiritual, and caregiver support. Home hospice is also associated with higher levels of patient and family satisfaction with care in most cases, though caregiver burden may be higher than in an inpatient setting. For some patients, the hospital or an inpatient hospice may be the preferred environment for terminal care, but an attempt should be made to secure a private room with open visitation hours. The intensive care unit,
with its austere, “high-tech” facade, may be the least desirable place to die, and this should be avoided whenever possible.
PREVENTION
In view of the exceptionally poor prognosis associated with established heart failure in older adults, it is essential to develop and implement preventive strategies. Appropriate treatment of hypertension has been repeatedly shown to reduce the incidence of heart failure by 50% or more. In the Hypertension in the Very Elderly Trial, for example, treatment of hypertension was associated with a 64% reduction in incident heart failure among patients 80 years or older; similarly, the intensive arm of the SPRINT trial (systolic target <120 mm Hg) had a 36% lower rate of acute decompensated HF than the standard arm (target <140 mm Hg). (Additional details are in Chapter 79.) The St. Vincent's Screening to Prevent Heart Failure (STOP-HF) and N-terminal Pro-brain Natriuretic Peptide Guided Primary Prevention of Cardiovascular Events in Diabetic Patients (PONTIAC) trials have shown that natriuretic peptide-based screening and targeted prevention can reduce heart failure, left ventricular dysfunction, and other major cardiovascular events. Treatment of hyperlipidemia has also been shown to reduce the incidence of heart failure, most likely through prevention of myocardial infarction and other ischemic events. Likewise, smoking cessation and regular exercise reduce the risk of myocardial infarction and stroke in older adults and likely have similar effects on the development of heart failure. Unfortunately, despite abundant evidence that heart failure prevention is feasible through risk factor modification, such strategies are underused, especially in persons older than 80 years.
SUMMARY
Heart failure is a common and important clinical problem in older adults, owing, in large part, to the complex interplay between age-related changes in the cardiovascular system, the high prevalence of cardiovascular and noncardiovascular disorders in the older population, and the widespread use of certain drugs and other therapies that may adversely affect cardiovascular physiology. As the population continues to age, heart failure will have a progressively greater impact on health care delivery systems. The impact of heart failure on quality of life and independence in the growing number of
older adults with this disorder is incalculable. Thus, there is a compelling need to develop and implement strategies for the prevention and treatment of heart failure, with particular emphasis on the geriatric population.
ACKNOWLEDGEMENT
This chapter was based on the many previous versions authored by our colleague, Dr. Mike Rich, who not only taught us much of what we know about Geriatric Cardiology but has been an inspiring leader and is considered by many as the founder of our field. His mentorship has fostered a new generation of cardiologists and geriatricians dedicated to the care of older adults with cardiovascular disease and benefited countless lives.
FURTHER READING
Afilalo J, Alexander KP, Mack MJ, et al. Frailty assessment in the cardiovascular care of older adults. J Am Coll Cardiol. 2014;63(8):747–762.
Anker SD, Butler J, Filippatos G, et al.; EMPEROR-Preserved Trial Investigators. Empagliflozin in heart failure with a preserved ejection fraction. N Engl J Med. 2021;385(16):1451–1461.
Beckett NS, Peters R, Fletcher AE, et al. Treatment of hypertension in patients 80 years of age or older. N Engl J Med. 2008;358:1887–1898.
Chaudhry SI, Wang Y, Gill TM, Krumholz HM. Geriatric conditions and subsequent mortality in older patients with heart failure. J Am Coll Cardiol. 2010;55:309–316.
Cleland JGF, Tendera M, Adamus J, Freemantle N, Polonski L, Taylor J. The perindopril in elderly people with chronic heart failure (PEP-CHF) study. Eur Heart J. 2006;27:2238–2245.
DeFilippis EM, Nakagawa S, Maurer MS, Topkara VK. Left ventricular assist device therapy in older adults: addressing common clinical questions. J Am Geriatr Soc. 2019;67(11):2410–2419.
Feltner C, Jones CD, Cene CW, et al. Transitional care interventions to prevent readmissions for persons with heart failure. A systematic review and meta-analysis. Ann Intern Med. 2014;160:774–784.
Flather MD, Shibata MC, Coats AJ, et al. Randomized trial to determine the effect of nebivolol on mortality and cardiovascular hospital admission in elderly patients with heart failure (SENIORS). Eur Heart J. 2005;26:215–225.
Forman DE, Arena R, Boxer R, et al. American Heart Association Council on Clinical Cardiology; Council on Cardiovascular and Stroke Nursing; Council on Quality of Care and Outcomes Research; and Stroke Council. Prioritizing functional capacity as a principal end point for therapies oriented to older adults with cardiovascular disease: a scientific statement for healthcare professionals from the American Heart Association. Circulation. 2017;135(16):e894–e918.
Gorodeski EZ, Goyal P, Hummel SL, et al. Domain management approach to heart failure in the geriatric patient: present and future. J Am Coll Cardiol. 2018;71(17):1921–1936.
Gurwitz JH, Magid DJ, Smith DH, et al. Contemporary prevalence and correlates of incident heart failure with preserved ejection fraction. Am J Med. 2013;126:393–400.
Heidenreich PA, Albert NM, Allen LA, et al. Forecasting the impact of heart failure in the United States: a policy statement from the American Heart Association. Circ Heart Fail. 2013;6:606–619.
Homma S, Thompson JLP, Pullicino PM, et al. Warfarin and aspirin in patients with heart failure and sinus rhythm. N Engl J Med. 2012;366:1859–1869.
Jurgens CY, Goodlin S, Dolansky M, et al. Heart failure management in skilled nursing facilities: a scientific statement from the American Heart Association and the Heart Failure Society of America. Circ Heart Fail. 2015;8:655–687.
Lakatta EG, Levy D. Arterial and cardiac aging: major shareholders in cardiovascular disease enterprises: Part I: aging arteries: a “set up” for vascular disease. Circulation. 2003;107(1):139–146.
Lakatta EG, Levy D. Arterial and cardiac aging: major shareholders in cardiovascular disease enterprises: Part II: the aging heart in health: links to heart disease. Circulation. 2003;107(2):346–354.
Mentz RJ, Kelly JP, von Lueder TG, et al. Noncardiac comorbidities in heart failure with reduced versus preserved ejection fraction. J Am Coll Cardiol. 2014;64:2281–2293.
Nassif ME, Windsor SL, Borlaug BA, et al. The SGLT2 inhibitor dapagliflozin in heart failure with preserved ejection fraction: a multicenter randomized trial. Nat Med. 2021;27(11):1954–1960.
Pitt B, Pfeffer MA, Assmann SF, et al. Spironolactone for heart failure with preserved ejection fraction. N Engl J Med. 2014;370:1383–1392.
Rich MW, Chyun DA, Skolnick AH, et al. Knowledge gaps in cardiovascular care of the older adult population: a scientific statement from the American Heart Association, American College of Cardiology, and American Geriatrics Society. Circulation. 2016;133(21):2103–2122.
Saczynski JS, Go AS, Magid DJ, et al. Patterns of comorbidity in older adults with heart failure: the Cardiovascular Research Network PRESERVE study. J Am Geriatr Soc. 2013;61:26–33.
Santangeli P, Di Biase L, Dello Russo A, et al. Meta-analysis: age and effectiveness of prophylactic implantable cardioverter-defibrillators. Ann Intern Med. 2010;153:592–599.
Solomon SD, McMurray JJV, Anand IS, et al. PARAGON-HF Investigators and Committees. Angiotensin-neprilysin inhibition in heart failure with preserved ejection fraction. N Engl J Med. 2019;381(17):1609–1620.
Upadhya B, Stacey RB, Kitzman DW. Preventing heart failure by treating systolic hypertension: what does the SPRINT add? Curr Hypertens Rep. 2019;21(1):9.
Virani SS, Alonso A, Benjamin EJ, et al.; American Heart Association Council on Epidemiology and Prevention Statistics Committee and Stroke Statistics Subcommittee. Heart Disease and Stroke Statistics-2020 Update: a report from the American Heart Association. Circulation. 2020;141(9):e139–e596.
Whelan DJ, Goodlin SJ, Dickinson MG, et al. End-of-life care in patients with heart failure. J Card Fail. 2014;20:121–134.
Yancy CW, Jessup M, Bozkurt B, et al. 2017 ACC/AHA/HFSA Focused Update of the 2013 ACCF/AHA Guideline for the Management of Heart Failure: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Failure Society of America. J Card Fail. 2017;23(8):628–651.
Chapter 77
Cardiac Arrhythmias
Nway Le Ko Ko, Win-Kuang Shen
INTRODUCTION
This chapter provides an overview of conditions related to cardiac rhythm disorders, with a focus on the older population. Terms used in this chapter such as "older patients" or "older population" generally refer to patients older than 65 years, unless otherwise stated based on the specific referenced clinical investigations.
SYNCOPE
The incidence of syncope is high in the older population, rising sharply after age 70 years, and is usually associated with poor outcomes: older adults face a greater risk of hospitalization and death related to syncope. They are vulnerable to syncope because of age-related changes in the cardiovascular and autonomic nervous systems, comorbid conditions, polypharmacy, and a decreased ability to conserve intravascular volume. In many instances, syncope in an older adult is multifactorial, with many predisposing factors present simultaneously. Hence, a comprehensive multidisciplinary approach is often necessary for diagnosis and management. Guideline-directed evaluation and management of patients with syncope have been published by the ACC/AHA/HRS (2017) and by the ESC (2018). For the purposes of this chapter, pertinent conditions causing syncope in the older population are discussed in this section.
Orthostatic Hypotension
Orthostatic hypotension (OH) is a common cause of syncope in the geriatric population, with a prevalence of 30% among those older than 75 years and up to 50% among frail older adults living in nursing homes. OH is defined as a sustained decline of greater than or equal to 20 mm Hg in systolic or greater than or equal to 10 mm Hg in diastolic blood pressure upon standing. There are four types of OH (Table 77-1). OH can be caused by impaired autonomic reflexes resulting in pooling of blood upon standing, reduced vasoconstriction, and cerebral hypoperfusion with resultant syncope. Older adults have decreased heart rate responsiveness to postural changes and diminished baroreceptor sensitivity, which impair the ability to adapt to orthostatic stress. In addition, reduced concentrations of plasma aldosterone, coupled with impaired thirst and polypharmacy (diuretics and vasodilators), place older patients at risk of volume depletion. Underlying autonomic insufficiency, such as autonomic neuropathy, diabetic neuropathy, amyloidosis, or neurologic disorders like Parkinson disease and multiple system atrophy (Shy-Drager syndrome), should be considered in older patients presenting with recurrent orthostatic syncope. Postprandial syncope is a subtype of orthostatic syncope occurring within 30 to 90 minutes of food consumption, resulting from pooling of blood in the splanchnic circulation. Treatments include withdrawing offending medications, liberalizing salt and fluid intake, rising slowly from a supine position, avoiding prolonged standing, wearing compression stockings, and physical countermeasures such as crossing the legs while standing. Pharmacologic therapy includes midodrine or fludrocortisone to improve hypotension. Small, frequent meals and cold water ingestion are recommended to alleviate postprandial syncope, and octreotide may be beneficial for those with recurrent postprandial syncope.
These treatment options need to be individualized due to the frequent presence of comorbid conditions in older patients.
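The numeric OH criteria above amount to a simple threshold rule. As an illustrative sketch only (not clinical software; the function name and argument layout are our own, and the thresholds are the consensus values stated in the text):

```python
def orthostatic_hypotension(supine_sbp, supine_dbp, standing_sbp, standing_dbp):
    """Apply the consensus OH definition: a sustained drop of
    >= 20 mm Hg systolic or >= 10 mm Hg diastolic on standing.
    Blood pressures are in mm Hg."""
    systolic_drop = supine_sbp - standing_sbp
    diastolic_drop = supine_dbp - standing_dbp
    return systolic_drop >= 20 or diastolic_drop >= 10

# Example: supine 140/80 mm Hg falling to 115/72 mm Hg on standing
print(orthostatic_hypotension(140, 80, 115, 72))  # True (25 mm Hg systolic drop)
```

In practice the drop must also be sustained (typically measured at 1 and 3 minutes of standing), which a single pair of readings cannot establish.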
TABLE 77-1 ■ TYPES OF ORTHOSTATIC HYPOTENSION (OH)
Learning Objectives
Review guideline-directed management of syncope with a focus on conditions relevant to the older population.
Understand the etiologies of bradyarrhythmia, indications for permanent pacemaker (PPM) placement, and selection of the PPM mode.
Discuss the goals of rate and rhythm control and the approach to anticoagulation in older patients with atrial fibrillation (AF).
Summarize the management of supraventricular tachycardia (SVT) in the older population.
Understand ventricular tachycardia (VT) in the older population, along with indications for an implantable cardioverter-defibrillator (ICD) for primary and secondary sudden cardiac death (SCD) prevention.
Review indications for cardiac resynchronization therapy (CRT).
Key Clinical Points
1. Age-related changes throughout the heart and conduction system predispose older individuals to syncope, bradycardia, atrial fibrillation, and supraventricular and ventricular tachyarrhythmias.
2. Syncope is common in the older population. It is a clinical manifestation associated with cardiac arrhythmias or other conditions that alter cerebral perfusion, causing transient loss of consciousness.
3. The indications for a PPM for treatment of bradyarrhythmia are similar in older and younger patients. More than 80% of PPMs are placed in patients 65 years or older, with sinoatrial dysfunction being the leading indication for PPM implantation in this age group.
4. Compared with single-chamber ventricular pacing, dual-chamber pacing reduces the risk of AF but does not affect mortality or the risk of stroke.
5. Age greater than 65 years is a well-recognized risk factor for thromboembolism in patients with AF. Treatment for stroke prevention in patients with AF is based on the CHA2DS2-VASc risk stratification scheme.
6. In asymptomatic or mildly symptomatic patients with AF, a strategy of pharmacologic rate control and anticoagulation is associated with similar or better outcomes than a strategy of rhythm control.
7. In patients with symptomatic AF refractory to pharmacologic treatment, various catheter-based ablation procedures, as well as the surgical maze procedure, provide effective control of rate and/or arrhythmia in selected groups of older patients.
8. The indications for an implantable cardioverter-defibrillator (ICD) and cardiac resynchronization therapy (CRT) are similar in older and younger patients, as are the benefits in terms of reducing mortality and improving symptoms. However, limited data are available on outcomes from these devices in patients older than 80 years.
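The CHA2DS2-VASc scheme referenced above is a point-based algorithm: congestive heart failure 1, hypertension 1, age 75 years or older 2 (65-74 years scores 1), diabetes 1, prior stroke/TIA/thromboembolism 2, vascular disease 1, and female sex 1. As an illustrative sketch only (not clinical software; the function name and argument layout are our own):

```python
def cha2ds2_vasc(age, female, chf, htn, diabetes, stroke_tia, vascular):
    """Compute the CHA2DS2-VASc stroke-risk score for atrial fibrillation.

    Points: CHF 1, hypertension 1, age >= 75 scores 2 (65-74 scores 1),
    diabetes 1, prior stroke/TIA/thromboembolism 2, vascular disease 1,
    female sex 1. Returns an integer from 0 to 9.
    """
    score = 0
    score += 1 if chf else 0
    score += 1 if htn else 0
    if age >= 75:
        score += 2
    elif age >= 65:
        score += 1
    score += 1 if diabetes else 0
    score += 2 if stroke_tia else 0
    score += 1 if vascular else 0
    score += 1 if female else 0
    return score

# Example: a 78-year-old woman with hypertension and diabetes
print(cha2ds2_vasc(78, True, False, True, True, False, False))  # 5 (age 2 + female 1 + HTN 1 + DM 1)
```

Note that because age contributes points twice over (1 point at 65-74, 2 points at 75 or older), essentially all patients in the population this chapter addresses score at least 1 on age alone.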
Neurocardiogenic Syncope (Vasovagal Syncope or VVS)
Vasovagal syncope (VVS) is the most common form of syncope in the younger population, but it also occurs not infrequently in older patients. VVS results from a reflex causing hypotension and bradycardia, triggered by prolonged standing or by exposure to emotional stress, pain, or medical procedures. It is typically associated with a prodrome of diaphoresis, warmth, and pallor, and with fatigue after the event; these clinical characteristics are often more subtle or absent in older patients. Three types of vasovagal response are summarized in Table 77-2. Conservative, nonpharmacologic management (such as counter-pressure maneuvers, orthostatic training, and liberalization of salt and fluid intake) may help, but no specific medical therapy has been proven widely effective. Pacemaker therapy may be beneficial in older patients with a predominantly cardioinhibitory VVS. The 2017 ACC/AHA/HRS Guideline for the Evaluation and Management of Patients with Syncope recommends dual-chamber pacing as reasonable for patients older than 40 years with recurrent VVS and spontaneous pauses. Closed loop stimulation (CLS) is a newer pacing technology that detects local impedance changes in the right ventricle (RV), which may be related to RV preload and contractility. Early detection of impedance changes from the RV pacemaker lead initiates pacing that may prevent activation of the cardioinhibitory vasovagal reflex. Preliminary data from two recent clinical trials demonstrated a significant reduction of recurrent syncope in patients randomized to CLS pacing.
TABLE 77-2 ■ THREE TYPES OF VASOVAGAL RESPONSE
Carotid Sinus Syndrome
Carotid sinus hypersensitivity (CSH) is common in the older population, with a prevalence estimated to be as high as 30% among older individuals presenting with unexplained falls. It is defined as a pause of greater than or equal to 3 seconds or a decrease in systolic blood pressure of greater than or equal to 50 mm Hg during carotid sinus massage (CSM). Carotid sinus syndrome (CSS) is diagnosed when CSH is associated with symptoms of syncope or presyncope. CSM should be a routine part of the examination in older patients presenting with syncope, unless there is a carotid bruit or a transient ischemic attack, stroke, or myocardial infarction within the prior 3 months.
Observational and randomized studies have shown that recurrent symptoms are significantly reduced after PPM implantation in patients with CSS. Dual-chamber pacing is recommended, although data from randomized trials are lacking. Newer pacing algorithms, such as the "rate-drop response" or "sudden-brady response," which accelerate the pacing rate when bradycardia is detected, are available. However, the clinical utility of these newer algorithms has not been shown to be superior to that of conventional PPMs.
Cardiogenic Syncope
Cardiogenic syncope is caused by arrhythmia (bradyarrhythmia or tachyarrhythmia), hypotension due to a low cardiac index (cardiogenic shock; reduced cardiac filling from cardiac tamponade or restrictive cardiomyopathy; or infiltrative cardiomyopathy such as amyloidosis), or blood flow obstruction (eg, valvular stenosis or hypertrophic obstructive cardiomyopathy [HOCM]). Characteristics associated with an increased probability of cardiac syncope include older age; male gender; known heart disease (tachyarrhythmia, bradyarrhythmia, coronary artery disease [CAD], structural heart disease, reduced ventricular function, congenital heart disease); syncope with a brief prodrome (eg, palpitations) or no prodrome; syncope during exertion or while supine; a low number of previous syncopal episodes; and a family history of sudden cardiac death (SCD). Treatment of syncope due to bradycardia or tachycardia is discussed in the following sections. Treatment of low cardiac output in the setting of structural heart disease or blood flow obstruction is beyond the scope of this chapter.
BRADYARRHYTHMIA
Bradycardia is common in older patients even without apparent cardiovascular disease. With advancing age, the number of cardiac myocytes declines, while residual myocytes enlarge and elastic and collagenous tissue increases in the interstitial matrix and conduction system. In addition to these age-related structural changes, prolongation of the cellular action potential duration and a diminished autonomic response further increase the propensity for bradycardia. Clinical bradycardia can be categorized as sinus node dysfunction (SND) or atrioventricular conduction block (AVB).
Sinus Node Dysfunction
Sinus node dysfunction (SND), historically known as sick sinus syndrome (SSS), is related to age-dependent progressive fibrosis of the sinus nodal tissue and surrounding atrial myocardium and hence occurs more commonly in older patients. Extrinsic causes include myocardial ischemia or infarction, infiltrative diseases, collagen vascular disease, surgical trauma, endocrine abnormalities, autonomic effects, and neuromuscular disorders. Patients with SND may present with persistent sinus bradycardia, sinus arrest, or sinoatrial exit block (Figure 77-1A-D). The severity of symptoms such as lightheadedness, exercise intolerance, presyncope, or syncope generally correlates with the heart rate or the pause duration. In older patients with SND, paroxysmal atrial tachycardia (AT) or atrial fibrillation (AF) is often concurrently present (tachy-brady syndrome).
FIGURE 77-1A. Sinus bradycardia (sinus rate < 60 bpm). In this telemetry tracing, the heart rate is 42 bpm.
FIGURE 77-1B. Sinus arrest of 4.2 seconds in a patient with paroxysmal atrial fibrillation/flutter and sinus node dysfunction.
FIGURE 77-1C. Sinoatrial exit block, type I. There is progressive shortening of P-P interval before the absence of the next P wave.
FIGURE 77-1D. Sinoatrial exit block type II. The P-P interval is constant before the absence of the next P wave. The pause, due to the absence of the next P wave (denoted by the red arrow), is exactly twice the previous P-P interval.
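The distinction between the rhythms in Figures 77-1B and 77-1D is arithmetic: in type II sinoatrial exit block the pause is an exact multiple of the baseline P-P interval, whereas in sinus arrest it generally is not. A toy check of that rule (intervals in seconds; the function name and tolerance are our own assumptions, and real ECG analysis must also account for sinus arrhythmia):

```python
def pause_is_pp_multiple(pause, baseline_pp, tol=0.05):
    """Return True if the pause is approximately an integer multiple
    (>= 2x) of the baseline P-P interval, the pattern expected in
    type II sinoatrial exit block rather than sinus arrest."""
    ratio = pause / baseline_pp
    nearest = round(ratio)
    return nearest >= 2 and abs(ratio - nearest) <= tol

# The pause in Figure 77-1D is exactly twice the preceding P-P interval:
print(pause_is_pp_multiple(2.0, 1.0))  # True  (consistent with type II exit block)
print(pause_is_pp_multiple(4.2, 1.0))  # False (non-multiple, suggests sinus arrest)
```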
The benefit of PPM is to relieve symptoms and to improve quality of life (QOL) in patients with SND. Before PPM is considered, transient reversible causes should be corrected. The synopsis on indications and selection of PPM for SND from 2018 ACC/AHA/HRS bradyarrhythmia guideline can be
found in Table 77-3. Based on randomized studies comparing atrial-based pacing (single-chamber AAI or dual-chamber DDD) with single-chamber ventricular-based pacing (VVI), the incidence of AF is higher with ventricular-based pacing. Ventricular-based pacing can also cause pacemaker syndrome, in which uncoordinated atrial and ventricular depolarization leads to valvular regurgitation and heart failure symptoms. However, for patients with symptomatic SND that is infrequent, or for those who are frail or bedridden with limited functional capacity or an unfavorable prognosis (survival <1 year), single-chamber ventricular pacing could be considered to reduce complications related to pacemaker implantation. When single-chamber ventricular pacing is deemed appropriate, a leadless pacemaker could be considered in selected patients. A standard dual-chamber PPM is shown in Figure 77-2A, B; a contemporary single-chamber leadless PPM is shown in Figure 77-3A, B.
TABLE 77-3 ■ INDICATIONS AND SELECTION OF PACEMAKER (PPM) THERAPY IN SINUS NODE DYSFUNCTION (SND)
FIGURE 77-2A. Chest X-ray AP view showing dual-chamber pacemaker with right atrial and right ventricular leads (green arrow indicates the atrial lead and red arrow indicates the right ventricular lead).
FIGURE 77-2B. Chest X-ray lateral view showing dual-chamber pacemaker with right atrial and right ventricular leads (green arrow indicates the atrial lead and red arrow indicates the right ventricular lead).
FIGURE 77-3A. Chest X-ray AP view showing a leadless pacemaker (arrowed).
FIGURE 77-3B. Chest X-ray lateral view showing a leadless pacemaker (arrowed).
Atrioventricular Conduction Block
Atrioventricular conduction block (AVB) is mostly degenerative in nature, due to fibrosis in the conduction system including the AV node, His bundle, bundle branches, and Purkinje-to-myocardium connections. There are three degrees of AVB (first degree; second degree, Mobitz type I or type II; and third degree). The ECG characteristics of AVB are shown in Figure 77-4A-F.
FIGURE 77-4A. First-degree AVB (P waves associated with 1:1 atrioventricular conduction and a PR interval > 200 ms). In this figure, PR interval is 460 ms.
FIGURE 77-4B. Mobitz type I AVB (P waves at a constant rate with a periodic single nonconducted P wave, with progressive prolongation of the PR interval before the nonconducted P wave; the Wenckebach phenomenon).
FIGURE 77-4C. Mobitz type II AVB (P waves with a constant rate with a periodic single nonconducted P wave associated with constant PR interval before and after the nonconducted P wave).
FIGURE 77-4D. Complete AVB (P waves with constant rate and QRS complexes with constant rate without evidence of AV conduction).
FIGURE 77-4E. 2:1 AVB (P waves with a constant rate where every other P wave conducts to the ventricles) with left bundle branch block (LBBB).
FIGURE 77-4F. Holter monitoring tracing showing high-grade AVB (≥ 2 consecutive P waves at a constant physiologic rate that do not conduct to the ventricles).
In patients with AVB, transient reversible causes should be corrected before PPM is considered. A synopsis on indications and selection of PPM for AVB from the 2018 ACC/AHA/HRS bradyarrhythmia guideline can be found in Table 77-4.
TABLE 77-4 ■ INDICATIONS AND SELECTION OF PACEMAKER (PPM) THERAPY IN ATRIOVENTRICULAR BLOCK (AVB)
Physiologic Pacing (Cardiac Resynchronization Therapy)
RV pacing has been associated with negative physiologic and clinical consequences of ventricular dyssynchrony, such as left ventricular (LV) chamber enlargement, worsening functional mitral regurgitation (MR), reduced left ventricular ejection fraction (LVEF), and increased inter- and intraventricular dyssynchrony. The risk of RV pacing-induced cardiomyopathy increases with the burden of RV pacing. The Mode Selection Trial showed that RV pacing greater than or equal to 40% of the time led to a 2.6-fold increase in HF hospitalizations. The 2018 ACC/AHA/HRS bradycardia guideline suggests that it is reasonable to choose pacing methods that maintain physiologic ventricular activation, such as cardiac resynchronization therapy (CRT), for patients with AVB who have an LVEF less than 50% and are expected to require ventricular pacing more than 40% of the time. If ventricular pacing is expected less than 40% of the time, it is reasonable to choose conventional RV pacing.
In addition to CRT for patients with AVB, CRT is indicated in patients with heart failure of New York Heart Association (NYHA) functional class II, III, or ambulatory class IV with left bundle branch block (LBBB). Indications for CRT in patients with heart failure are summarized in Table 77-5. In patients meeting indications for CRT, clinical trials have consistently shown improvement in heart failure symptoms, exercise capacity, and survival.
TABLE 77-5 ■ INDICATIONS FOR CARDIAC RESYNCHRONIZATION THERAPY (CRT)
Other physiologic pacing methodologies are evolving, such as His bundle pacing (HBP) and left bundle branch area pacing (LBBAP). His bundle pacing can be considered to maintain physiologic ventricular activation in selected patients with AVB. Long-term outcome data pertaining to the older population are required for future guidelines. Chest X-rays of a CRT PPM are shown in Figure 77-5A, B.
FIGURE 77-5A. Chest X-ray AP view showing CRT pacemaker with right atrial, right ventricular, and coronary sinus leads (the green arrow indicates the atrial lead; the red arrow right ventricular lead; and the blue arrow coronary sinus lead also known as left ventricular lead).
FIGURE 77-5B. Chest X-ray lateral view showing CRT pacemaker with right atrial, right ventricular, and coronary sinus leads (the green arrow indicates the atrial lead; the red arrow right ventricular lead; and the blue arrow coronary sinus lead also known as left ventricular lead).
Indications for PPM After Transcatheter Aortic Valve Replacement
Transcatheter aortic valve replacement (TAVR) is being increasingly performed in the older population (see Valvular Heart Disease, Chapter 75). Acquired AVB commonly occurs following TAVR. Predictors of PPM implantation are preexisting right bundle branch block (RBBB), increased left ventricular end-diastolic diameter, an increased valve prosthesis to left ventricular outflow tract ratio, and new LBBB. The incidence after TAVR of new LBBB is 19% to 55% and of high-degree AVB is 10%; however, half of these may resolve before discharge. PPM is indicated before discharge for patients with new, symptomatic AVB associated with hemodynamic instability. Indications for pacing in patients with persistent LBBB without symptoms are evolving.
Studies have shown that in up to 30% of patients with new LBBB, the first episode of high-degree AVB occurs after discharge, with a potential risk for syncope. Careful surveillance for bradycardia after discharge is recommended for those who develop a prolonged PR interval or new bundle branch block after TAVR.
PPM Management Near End of Life
Conversations about end-of-life PPM management should begin at the time of implantation or at an early stage of terminal illness. Patients should be encouraged to complete an advance directive early on to address device management and deactivation in the event of terminal illness. As with any other decision to withdraw treatment, the decision to deactivate a PPM can be made by the patient or the legal surrogate through a shared decision-making process together with the physician. The physician's role is to inform the patient, surrogate, and family members of the consequences of deactivating the PPM. In patients who are pacemaker dependent, death may immediately follow deactivation; those who are not dependent must be monitored for symptoms such as respiratory distress that warrant intensification of comfort measures.
After shared decision-making, a written order from the physician to deactivate the PPM is required, along with a do-not-resuscitate (DNR) order. Medical, ethical, and legal guiding principles on deactivation of PPMs can be found in the Heart Rhythm Society 2010 Consensus Statement on this topic.
TACHYARRHYTHMIA
Atrial Fibrillation
Atrial fibrillation (AF) is the most common arrhythmia in the older population, as its prevalence and incidence increase with age. Common symptoms of AF include palpitations, fatigue, lightheadedness, shortness of breath, chest discomfort, and intolerance to activities; severe presentations include HF and syncope. Diagnostic studies involve an ECG and an echocardiogram to evaluate chamber size, valvular function, and filling pressures, as well as laboratory tests such as a complete blood count, comprehensive metabolic panel, thyroid function, and, in select cases, cardiac biomarkers. If clinically warranted, ischemic evaluation or a sleep study should be considered. Management of AF includes rate or rhythm control, stroke prevention, and modification of risk factors such as weight loss, blood pressure and diabetes control, reduction of alcohol consumption, diagnosis and treatment of obstructive sleep apnea (OSA), and regular exercise. The principles of AF management are the same in the younger and older populations. However, the older population has a higher prevalence of comorbidities in addition to higher stroke and bleeding risk, which makes these patients less tolerant of medications; they also have a lower success rate for interventional procedures and a higher incidence of adverse events or complications.
Rate versus Rhythm Control
In the older population, neither rate nor rhythm control by pharmacologic therapy has been proven superior, based on several studies including the AFFIRM (Atrial Fibrillation Follow-up Investigation of Rhythm Management) trial. Rate control appears safer, with efficacy similar to rhythm control, in older patients, especially if they are asymptomatic or only mildly symptomatic. Long-term rhythm control relies on antiarrhythmic drugs (AAD) or catheter ablation (CA). However, AAD are associated with an increased incidence of adverse events in older patients due to variable pharmacokinetics, pharmacodynamics, drug interactions from polypharmacy, and impaired renal or hepatic function. In the AFFIRM study, subgroup analysis of the population older than 70 years showed that AAD therapy was associated with higher all-cause mortality. Rate control is generally preferred in older patients with mild or no symptoms; rhythm control is preferred in older patients with symptoms associated with AF.
Rate Control
β-Blockers (BB) and non-dihydropyridine calcium channel blockers (CCB) slow AV nodal conduction and are first-line agents for rate control. The optimal heart rate goal is between 80 and 110 bpm in the absence of significant symptoms such as palpitations, shortness of breath, lightheadedness, chest discomfort, or signs of HF. Although less effective, digoxin can also be considered for rate control, but caution must be taken due to its narrow therapeutic index, especially in the setting of unstable renal function. Rapid up-titration or use of higher doses of BB and CCB can be associated with adverse events related to hypotension, attenuation of the baroreceptor reflex, or impaired conduction or autonomic function. CCB are associated with higher mortality in patients with LVEF less than 40%. In patients with severe symptoms in whom drug therapy fails to control rate, ablation of the AVN followed by implantation of a PPM is effective for controlling ventricular rate. Although ablation of the AVN does not eliminate AF or the need for anticoagulation, it relieves symptoms and improves QOL, exercise tolerance, and LVEF in patients with tachycardia-induced cardiomyopathy. Biventricular pacing after AVN ablation is associated with
improved 6-minute walk distance compared with conventional RV apex pacing in patients with preexisting HF symptoms.
Rhythm Control
Rate control alone may be insufficient to alleviate symptoms in some older patients. In those select patients, restoration of sinus rhythm can be beneficial. Approaches to rhythm control include AAD therapy, catheter ablation (CA), surgical ablation, and direct current electrical cardioversion (DCCV). In the acute setting, the efficacy of intravenous and oral AAD is highly variable, ranging from 30% to 75%; efficacy also varies with the age of the patient, duration of the arrhythmia, underlying LVEF, and left atrial size. External DCCV can restore sinus rhythm in 75% to 90% of patients with AF. For maintenance of sinus rhythm in patients with recurrent AF, AAD therapy can be considered. AAD therapy for rhythm control is summarized in Table 77-6.
TABLE 77-6 ■ SUMMARY OF ANTIARRHYTHMIC DRUG (AAD) THERAPY FOR RHYTHM CONTROL IN ATRIAL FIBRILLATION (AF)
Role of Ablation
It has been shown in multiple randomized controlled trials that catheter ablation of AF is safe and superior to AAD in preventing recurrence of AF and maintaining sinus rhythm. The recent CABANA (Catheter Ablation
versus Antiarrhythmic Drug Therapy in Atrial Fibrillation) trial (n = 2204 patients randomized to either catheter ablation or drug therapy) showed that catheter ablation was not superior to AADs for the primary endpoint of all-cause mortality, disabling stroke, serious bleeding, or cardiac arrest at 5 years. As a predefined secondary endpoint, catheter ablation did not reduce all-cause mortality alone but did reduce the combined endpoint of all-cause death or cardiovascular hospitalization in comparison to AAD. AF recurrence was significantly less frequent in patients randomized to catheter ablation. A recent meta-analysis reported that catheter ablation resulted in a significant reduction in all-cause mortality in AF patients with HF and reduced EF, along with significantly fewer cardiovascular hospitalizations and fewer AF recurrences. Subgroup analyses from CABANA suggested that younger patients (age < 65 years), men, and patients with HF derived more benefit from catheter ablation compared with AAD; such benefit was not evident in patients older than 75 years. Therefore, age and comorbidities are important factors in the decision-making for catheter ablation. Guidelines derived from clinical evidence recommend catheter ablation primarily for reduction of symptoms and improvement of QOL. There are currently no compelling data to support the use of catheter ablation to reduce the risk of stroke, especially in patients with a high CHA2DS2-VASc score.
The Cox maze procedure is another strategy for AF ablation that can be considered in patients with symptomatic AF undergoing cardiac surgery. Standalone surgical ablation with minimally invasive techniques is rapidly evolving; limited data suggest it is reasonable for patients with persistent or long-standing persistent AF, and for patients with paroxysmal AF who have failed one or more attempts at catheter ablation.
Stroke Prevention
The risk of stroke is five times higher in patients with AF. Anticoagulant therapy reduces stroke risk and mortality associated with AF. Although older patients are more vulnerable to stroke, less than two-thirds of octogenarians with AF are anticoagulated; with warfarin, one of the challenges is maintaining the INR within the therapeutic range. The need for anticoagulation is determined by the validated CHA2DS2-VASc score (Table 77-7A). Risk of bleeding on anticoagulation is commonly assessed by the HAS-BLED score, with a score of 0–2 reflecting low risk and 3–9 high risk (Table 77-7B). Advanced age is a major risk factor for both stroke and bleeding. Although stroke and bleeding risk coexist, the benefits of anticoagulation outweigh the bleeding risk in most scenarios. Before initiation of anticoagulation, a thorough assessment of frailty, cognitive function, life expectancy, polypharmacy/drug interactions, nutrition, liver function, and renal function is warranted to ensure an individualized optimal approach.
TABLE 77-7 ■ SCORING SYSTEMS FOR STRATIFICATION OF STROKE AND BLEEDING RISK IN ATRIAL FIBRILLATION (AF)
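Both scores are simple sums of risk-factor points. As an illustration only (function names are hypothetical, and this sketch does not replace Table 77-7 or clinical judgment), the standard point assignments can be expressed as:

```python
def cha2ds2_vasc(chf: bool, hypertension: bool, age: int, diabetes: bool,
                 stroke_tia_te: bool, vascular_disease: bool, female: bool) -> int:
    """CHA2DS2-VASc stroke-risk score (0-9) using the standard point values."""
    score = chf + hypertension + diabetes + vascular_disease + female  # 1 point each
    score += 2 if age >= 75 else (1 if age >= 65 else 0)               # age bands
    score += 2 * stroke_tia_te  # prior stroke/TIA/thromboembolism counts double
    return score

def has_bled(hypertension: bool, abnormal_renal: bool, abnormal_liver: bool,
             stroke: bool, bleeding_history: bool, labile_inr: bool,
             age_over_65: bool, drugs: bool, alcohol: bool) -> int:
    """HAS-BLED bleeding-risk score (0-9); 0-2 suggests low, 3-9 high bleeding risk."""
    return sum([hypertension, abnormal_renal, abnormal_liver, stroke,
                bleeding_history, labile_inr, age_over_65, drugs, alcohol])

# Example: a 70-year-old woman with hypertension scores
# 1 (HTN) + 1 (age 65-74) + 1 (female) = 3, above the anticoagulation threshold.
```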
The available anticoagulant drugs include the vitamin K antagonist warfarin and the direct oral anticoagulants (DOACs). DOACs include the factor Xa inhibitors (apixaban, rivaroxaban, edoxaban) and the direct thrombin inhibitor dabigatran. The disadvantages of warfarin therapy include the need for INR monitoring, a narrow therapeutic range, interactions with foods and medications (especially in the setting of polypharmacy in older patients), and an increased tendency for unintentional overdose due to inter- and intraindividual variability in pharmacokinetics and pharmacodynamics. There have been four randomized controlled trials comparing DOACs with warfarin, in which patients 75 years or older represent almost 40% of the population. There was consistent evidence of at least noninferiority for the combined endpoint of stroke or systemic embolism, and a superior safety profile with less intracranial bleeding compared to warfarin. DOACs are recommended as first-line therapy for stroke prevention in eligible patients with AF. For patients 75 years or older, there is a lower risk of major bleeding, especially with apixaban and edoxaban. However, full-dose dabigatran and rivaroxaban are associated with a significantly increased risk of gastrointestinal (GI) bleeding; a proton pump inhibitor is recommended when these two DOACs are used.
A bleeding risk assessment using the HAS-BLED score has been shown to be clinically useful. Older patients at high risk of bleeding should be followed frequently with routine laboratory tests such as cell counts and liver and renal function tests. With DOAC use, renal function should be evaluated before initiation and reevaluated at least annually, or every 6 months or more frequently in those with renal insufficiency. To reduce bleeding risk, modifiable risk factors must be addressed, such as reduction of alcohol use, proper blood pressure control, avoidance of NSAIDs, and elimination of antiplatelet agents if possible. DOACs are contraindicated in advanced liver disease or liver failure with coagulopathy and should not be used in patients with Child-Pugh class C cirrhosis (Child-Pugh class B for rivaroxaban, due to a more than twofold increase in drug exposure). In patients with severe thrombocytopenia (< 50,000/μL), anticoagulation should be individualized and closely monitored given the lack of evidence from trials.
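Because DOAC eligibility and dose adjustment hinge on renal function, creatinine clearance is conventionally estimated with the Cockcroft-Gault equation (the formula used in the DOAC trials). A minimal sketch, with an illustrative function name:

```python
def cockcroft_gault_crcl(age_years: int, weight_kg: float,
                         serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimate creatinine clearance (mL/min) by the Cockcroft-Gault equation:
    (140 - age) * weight / (72 * serum creatinine), times 0.85 for women."""
    crcl = (140 - age_years) * weight_kg / (72 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# An 80-year-old, 60-kg woman with serum creatinine 1.2 mg/dL:
# (140-80)*60/(72*1.2)*0.85 = 35.4 mL/min, within dose-reduction range for several DOACs.
```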
For those who have an indication for anticoagulation but a contraindication to chronic anticoagulation, and who are able to tolerate short-term warfarin therapy, the percutaneously inserted left atrial appendage (LAA) occlusion device, the Watchman device, has been approved by the FDA. Anticoagulation with warfarin to a target INR between 2 and 3 is indicated for 45 days after Watchman implantation and then discontinued if complete closure of the LAA is confirmed by transesophageal echocardiogram (TEE).
After warfarin, aspirin and clopidogrel are recommended for 6 months, followed by long-term aspirin. For those with high bleeding risk, clopidogrel can be used for 6 months along with long-term aspirin therapy, without oral anticoagulation. Surgical LAA amputation can also be considered for patients with AF undergoing cardiac surgery.
A synopsis of recommendations for stroke prevention in AF is provided in Table 77-8, and a summary of DOACs can be found in Table 77-9.
TABLE 77-8 ■ STROKE PREVENTION IN PATIENTS WITH ATRIAL FIBRILLATION (AF)
• Shared decision-making (risk of stroke vs bleeding, patient values and preferences)
• CHA2DS2-VASc score for stroke risk stratification in nonvalvular AF
• Warfarin for valvular AF* regardless of CHA2DS2-VASc score (target INR 2–3)
• Anticoagulation for CHA2DS2-VASc ≥ 2 in men or ≥ 3 in women
• DOAC over warfarin for nonvalvular AF* (monitor renal function)
• Switch to DOAC if unable to maintain therapeutic INR with warfarin
• Warfarin or apixaban (only for nonvalvular AF) when CrCl < 15 mL/min
• Reduce DOAC dose in moderate-to-severe CKD (serum creatinine ≥ 1.5 mg/dL [apixaban], CrCl 15 to 30 mL/min [dabigatran], CrCl ≤ 50 mL/min [rivaroxaban], or CrCl 15 to 50 mL/min [edoxaban])
• Percutaneous LAA occlusion for contraindications to long-term anticoagulation
• Surgical occlusion of the LAA for nonvalvular AF patients undergoing cardiac surgery
• Dabigatran, rivaroxaban, and edoxaban should not be used in patients with end-stage CKD or on dialysis
*Definitions: Valvular AF refers to AF in the setting of moderate-to-severe mitral stenosis (MS) or the presence of a mechanical valve. Nonvalvular AF is AF in the setting of absence of moderate-to-severe MS and absence of a mechanical valve.
TABLE 77-9 ■ SUMMARY ON DIRECT ORAL ANTICOAGULANTS (DOACS)
Cryptogenic Stroke, Embolic Stroke of Undetermined Source, and Atrial Fibrillation
The cause of embolic stroke may not be apparent in approximately 30% of patients, many of whom are older with multiple comorbidities. Two randomized trials, NAVIGATE ESUS (Rivaroxaban Versus Aspirin in Secondary Prevention of Stroke and Prevention of Systemic Embolism in Patients with Recent Embolic Stroke of Undetermined Source) and RE-SPECT ESUS (Dabigatran Etexilate for Secondary Stroke Prevention in Patients With Embolic Stroke of Undetermined Source), studied the impact of empiric anticoagulation with rivaroxaban and dabigatran, respectively, on recurrent stroke in ESUS patients compared with aspirin. These studies found that empiric anticoagulation was not associated with lower rates of stroke recurrence than aspirin, and bleeding complications were higher with anticoagulation. Empiric anticoagulation in patients with cryptogenic stroke (CS) or embolic stroke of undetermined source (ESUS) is therefore currently not recommended. In patients with CS and ESUS, long-term surveillance for subclinical AF can be accomplished by implantation of a cardiac monitor
(loop recorder). Anticoagulation can be considered after AF is detected by the loop recorder.
Device-Detected Atrial High-Rate Episodes
Older patients without a history of AF frequently have implanted cardiac devices such as a pacemaker or defibrillator with capabilities of continuous rhythm monitoring. Some patients have device detected intermittent atrial high-rate events (AHREs) with or without symptoms. Most devices detect AHREs when atrial rates exceed 180 to 190 bpm. An association between increased risk of stroke or systemic embolism and AHREs has been consistently observed. AHREs lasting a minimum of 5 to 6 minutes have been associated with an increased risk of ischemic stroke, cardiovascular events, and death. AHREs should prompt a careful review of the documented electrograms to confirm AF or to consider additional ambulatory monitoring if the data from the implanted device are equivocal. The data on the correlation between the risk of thromboembolic complications and AF burden (frequency, duration, and pattern) continue to evolve rapidly. At this time it is generally recommended that anticoagulation therapy should be considered when AF is confirmed in patients with AHRE greater than 6 minutes and CHA2DS2-VASc > 2 or > 24 hours with CHA2DS2-VASc > 1 after goals and risks of long-term anticoagulation are reviewed with the patient.
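The duration and score thresholds stated in the preceding paragraph can be expressed as a simple decision sketch (illustrative only; the function name is hypothetical, and the real decision also requires confirmed AF on electrograms and shared decision-making):

```python
def consider_anticoagulation(ahre_minutes: float, cha2ds2_vasc: int,
                             af_confirmed: bool) -> bool:
    """Thresholds as stated in the text: AHRE > 6 minutes with CHA2DS2-VASc > 2,
    or AHRE > 24 hours with CHA2DS2-VASc > 1, once AF has been confirmed."""
    if not af_confirmed:
        return False
    if ahre_minutes > 24 * 60 and cha2ds2_vasc > 1:
        return True
    return ahre_minutes > 6 and cha2ds2_vasc > 2

# A confirmed 10-minute AHRE with a score of 3 meets the stated threshold.
```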
Atrial Flutter
Typical atrial flutter (AFL) is a reentrant tachycardia utilizing the inferior vena cava–tricuspid isthmus as the critical conduction pathway. It commonly coexists with AF. Rate control and cardioversion (electrical cardioversion or with use of a class III antiarrhythmic) can be effective. Ablation of the cavotricuspid isthmus is effective, with success rates greater than 90% to 95% and complication rates less than 1% to 2%. Management of stroke prevention in patients with AFL is similar to that in patients with AF.
Supraventricular Tachyarrhythmia
Supraventricular tachyarrhythmia (SVT) is less common in older patients, since most SVTs have been ablated when patients were young. AV nodal reentrant tachycardia (AVNRT), localized to the region of the AV node, is the most common type of SVT identified among older patients, followed by atrial tachycardia and atrioventricular reciprocating tachycardia (AVRT). The principles of drug and nondrug management of SVT are similar between younger and older patients, as recommended in the 2015 ACC/AHA/HRS Guidelines (a synopsis is provided in Table 77-10). β-Blockers and CCBs are considered first-line therapy for treating SVTs. Class I and III antiarrhythmic agents are effective for treating SVTs. Catheter ablation therapy is highly effective for the treatment of SVTs, even in older patients. Success rates for catheter ablation of SVT, regardless of age, range from 85% to better than 95%, depending primarily on the nature of the arrhythmia and the experience of the operator. The incidence of major complications associated with SVT ablation is less than 2% to 3%.
TABLE 77-10 ■ TREATMENT RECOMMENDATIONS FOR SUPRAVENTRICULAR ARRHYTHMIA (SVT)
• Vagal maneuvers for acute SVT (education for future episodes recommended)
• Adenosine for acute SVT
• Synchronized cardioversion for acute SVT with hemodynamic instability
• Synchronized cardioversion for acute SVT refractory to adenosine
• Oral β-blocker or diltiazem/verapamil for ongoing SVT
• EPS, with option of ablation, for diagnosis and treatment of SVT
• Ibutilide or procainamide IV for pre-excited SVT
• IV β-blocker or diltiazem/verapamil for acute SVT
• IV amiodarone for stable acute AVNRT
• Flecainide or propafenone for ongoing management of SVT with no structural heart disease or ischemia (if ablation not preferred)
• Pill-in-pocket β-blocker, diltiazem, or verapamil for well-tolerated AVNRT
• Sotalol for ongoing management of SVT (if ablation not preferred)
• Dofetilide for ongoing management of SVT (when other medications are not tolerated, contraindicated, or not effective, and if ablation not preferred)
• Amiodarone for ongoing management of SVT (when other medications including dofetilide are not tolerated, contraindicated, or not effective, and if ablation not preferred)
• Digoxin for ongoing management of SVT
• β-Blockers, diltiazem, verapamil, amiodarone, and digoxin are harmful in pre-excitation
• Treatment of SVT should be individualized in patients older than 75 years to incorporate age, comorbid illness, physical and cognitive functions, patient preferences, and severity of symptoms
Ventricular Tachyarrhythmia
The management of ventricular arrhythmias in older patients is similar to that in younger patients. In patients with asymptomatic nonsustained ventricular tachyarrhythmia (NSVT), a careful evaluation for the presence of cardiac disease, including occult CAD, structural heart disease, and left ventricular dysfunction, is required. Premature ventricular contractions and NSVT are associated with a benign prognosis in the absence of any significant heart disease. The risk of SCD is increased in patients with compromised LVEF, whether due to ischemic or nonischemic heart disease. Preventing SCD entails optimizing therapy directed at the underlying disease and the use of ICD therapy in selected patients. Although none of the indications for ICDs exclude or allude to special considerations in older patients, individual assessment and determination of primary therapeutic objectives are particularly pertinent in this population because comorbid medical illnesses are frequently present and life expectancy is shorter in older patients.
Prevention of Sudden Cardiac Death
Indications for implantation of an ICD for primary SCD prevention include:
(1) ischemic cardiomyopathy (> 40 days after MI or > 90 days after revascularization) with LVEF less than or equal to 35% and NYHA class II–III functional class, or LVEF less than 30% with NYHA class I functional class, on guideline-directed medical therapy (GDMT); (2) nonischemic cardiomyopathy with LVEF less than or equal to 35% and NYHA class II–III despite GDMT (benefit in patients with NYHA class I is not well established); (3) inducible sustained monomorphic ventricular tachyarrhythmia on electrophysiologic study (EPS) in patients with NSVT and EF less than or equal to 40% after MI. Indications for implantation of an ICD for secondary SCD prevention include: (1) cardiac arrest owing to VT or ventricular fibrillation (VF) not related to a transient or reversible cause (eg, acute myocardial infarction); (2) spontaneous sustained VT in association with structural heart disease; and (3) syncope of undetermined origin with clinically relevant, hemodynamically significant sustained VT or VF induced at EPS, when drug therapy is ineffective, not tolerated, or not preferred. Randomized, prospective clinical trials comparing AAD therapy to ICD have demonstrated the usefulness of ICD in reducing the risk of SCD and total mortality for both primary and secondary prevention in selected populations.
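The primary-prevention criteria enumerated above amount to a short decision rule. As a sketch only (function and parameter names are hypothetical, criterion (3) on EPS-inducible VT is omitted for brevity, and this does not substitute for the guideline text):

```python
from typing import Optional

def icd_primary_prevention(ischemic: bool, lvef: float, nyha: int, on_gdmt: bool,
                           days_since_mi: Optional[int] = None,
                           days_since_revasc: Optional[int] = None) -> bool:
    """Sketch of criteria (1) and (2) for primary-prevention ICD as listed in the text."""
    if not on_gdmt:
        return False
    if ischemic:
        # Waiting periods: > 40 days post-MI or > 90 days post-revascularization.
        waited = ((days_since_mi is not None and days_since_mi > 40) or
                  (days_since_revasc is not None and days_since_revasc > 90))
        if not waited:
            return False
        return (lvef <= 35 and nyha in (2, 3)) or (lvef < 30 and nyha == 1)
    # Nonischemic cardiomyopathy: benefit in NYHA class I not well established.
    return lvef <= 35 and nyha in (2, 3)
```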
Advanced age alone should not be a sole limiting factor for ICD implantation. However, data on survival benefit are limited from ICD trials in octogenarians and nonagenarians—the very old patient population.
Individualized, shared decision-making with very old patients is important, based on the patient's preferences, functional capacity, cognitive function, and underlying comorbidities.
ICD Management Near End of Life
During the shared decision-making process for initial ICD implantation, risks and benefits of device implantation and possible consequences of ICD therapy and shocks should be thoroughly discussed with the patient, family members, or caretakers. Studies showed that patients frequently do not completely understand the risks, benefits, and downstream burdens of their ICDs. Especially at the end of life, these repetitive shocks may cause additional distress to both patients and loved ones. Each patient or legal surrogate should be informed that they have a right to deactivate the ICD when the end-of-life decision is appropriate.
SUMMARY
The genesis of arrhythmia is the result of complex interactions amongst aging-related physiologic changes, disease-dependent substrate, risk factors, and genetic predisposition. Guidelines on the treatment of cardiac arrhythmias continue to evolve and are being updated regularly. Although many well-designed clinical trials have provided strong evidence for our clinical practice, these evidence-based recommendations need to be interpreted with caution in older patients (especially octogenarians and nonagenarians) as these patients are frequently under-represented in clinical trials. Perhaps more so than survival rates, clinical outcomes such as symptoms, quality of life, functional capacity, independent living, and hospitalization need to be critically addressed when treating arrhythmias in this fastest growing segment of our population.
FURTHER READING
Al-Khatib SM, Stevenson WG, Ackerman MJ, et al. 2017 AHA/ACC/HRS Guideline for Management of Patients With Ventricular Arrhythmias and
the Prevention of Sudden Cardiac Death: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Rhythm Society. J Am Coll Cardiol. 2018;72:e91–e220.
Asad ZUA, Yousif A, Khan MS, Al-Khatib SM, Stavrakis S. Catheter ablation versus medical therapy for atrial fibrillation: a systematic review and meta-analysis of randomized controlled trials. Circ Arrhythm Electrophysiol. 2019;12:e007414.
Calkins H, Hindricks G, Cappato R, et al. 2017 HRS/EHRA/ECAS/APHRS/SOLAECE expert consensus statement on catheter and surgical ablation of atrial fibrillation. Heart Rhythm.
2017;14:e275–e444.
Connolly SJ, Ezekowitz MD, Yusuf S, et al. Dabigatran versus warfarin in patients with atrial fibrillation. N Engl J Med. 2009;361:1139–1151.
Friedman DJ, Piccini JP, Wang T, et al. Association between left atrial appendage occlusion and readmission for thromboembolism among patients with atrial fibrillation undergoing concomitant cardiac surgery. JAMA. 2018;319:365–374.
Giugliano RP, Ruff CT, Braunwald E, et al. Edoxaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2013;369:2093–2104.
Goyal P, Maurer MS. Syncope in older adults. J Geriatr Cardiol.
2016;13:380–386.
Goyal P, Rich MW. Electrophysiology and heart rhythm disorders in older adults. J Geriatr Cardiol. 2016;13:645–651.
Granger CB, Alexander JH, McMurray JJ, et al. Apixaban versus warfarin in patients with atrial fibrillation. N Engl J Med. 2011;365:981–992.
Hindricks G, Potpara T, Dagres N, et al. 2020 ESC Guidelines for the diagnosis and management of atrial fibrillation developed in collaboration with the European Association for Cardio-Thoracic Surgery (EACTS): the Task Force for the diagnosis and management of atrial fibrillation of the European Society of Cardiology (ESC) developed with the special contribution of the European Heart Rhythm Association (EHRA) of the ESC. Eur Heart J. 2020;42:373–498.
January CT, Wann LS, Calkins H, et al. 2019 AHA/ACC/HRS Focused Update of the 2014 AHA/ACC/HRS Guideline for the Management of Patients With Atrial Fibrillation: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice
Guidelines and the Heart Rhythm Society in Collaboration With the Society of Thoracic Surgeons. Circulation. 2019;140:e125–e151.
Kusumoto FM, Schoenfeld MH, Barrett C, et al. 2018 ACC/AHA/HRS Guideline on the Evaluation and Management of Patients With Bradycardia and Cardiac Conduction Delay: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Rhythm Society. J Am Coll Cardiol. 2019;74:e51–e156.
Ntaios G. Embolic stroke of undetermined source: JACC review topic of the week. J Am Coll Cardiol. 2020; 75:333–340.
Packer DL, Mark DB, Robb RA, et al. Effect of catheter ablation vs antiarrhythmic drug therapy on mortality, stroke, bleeding, and cardiac arrest among patients with atrial fibrillation: the CABANA Randomized Clinical Trial. JAMA. 2019;321:1261–1274.
Page RL, Joglar JA, Caldwell MA, et al. 2015 ACC/AHA/HRS Guideline for the Management of Adult Patients With Supraventricular Tachycardia: Executive Summary: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Rhythm Society. Circulation. 2016;133:e471– 505.
Patel MR, Mahaffey KW, Garg J, et al. Rivaroxaban versus warfarin in nonvalvular atrial fibrillation. N Engl J Med. 2011;365:883–891.
Schäfer A, Flierl U, Berliner D, Bauersachs J. Anticoagulants for stroke prevention in atrial fibrillation in elderly patients. Cardiovasc Drugs Ther. 2020;34:555–568.
Shen W-K, Sheldon RS, Benditt DG, et al. 2017 ACC/AHA/HRS Guideline for the Evaluation and Management of Patients With Syncope. A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Rhythm Society. J Am Coll Cardiol. 2017;70:e39–e110.
Sutton R, de Jong JSY, Stewart JM, Fedorowski A, de Lange FJ. Pacing in vasovagal syncope: physiology, pacemaker sensors, and recent clinical trials—precise patient selection and measurable benefit. Heart Rhythm. 2020;17:821–828.
Undas A, Drabik L, Potpara T. Bleeding in anticoagulated patients with atrial fibrillation: practical considerations. Pol Arch Intern Med. 2020;130:47–58.
Chapter
78
Peripheral Vascular Disease
Jonathan R. Thompson, Jason M. Johanning
Peripheral vascular disease (PVD) is primarily a disease of aging and is strongly associated with impaired quality of life and increased cardiovascular mortality. The average age of patients seeking treatment is approximately 70 years. Various studies document a 15% to 20% prevalence rate over the age of 70 years. With the increasing age of the population, the diagnosis and treatment of PVD will become a priority. A working knowledge of the most common sites of disease, the initial diagnostic tests, the options for treatment, and treatment outcomes in the geriatric population is necessary to provide optimal guidance for these patients. This chapter is organized by the most commonly encountered arterial and venous diseases of the geriatric patient. Although each subset of PVD has its unique presentation, the underlying atherosclerotic process is a systemic disease and should be treated similarly to coronary atherosclerotic disease with regard to risk factor management (see Chapter 74 for additional information).
PERIPHERAL ARTERIAL DISEASE
Definition
Lower extremity arterial disease is commonly referred to as peripheral arterial disease (PAD). As a whole this disease encompasses atherosclerotic narrowing of arteries from the infrarenal aorta to the level of the tibial arteries at the foot.
Epidemiology
Risk factors for PAD are similar to those of coronary artery disease and include smoking history, advanced age, male gender, and positive family
history. The prevalence of PAD increases with increasing age with up to 20% of people older than 75 years having some form of lower extremity arterial disease, although classic claudication symptoms are present in less than half of these individuals.
Presentation
The presentation of PAD includes a range of symptoms. The most common is claudication: cramping of the lower extremity muscles after walking a fixed distance. The cramping or aching is primarily in the calves and buttocks and is relieved within 10 minutes of cessation of activity. This classic presentation of claudication has unfortunately been shown to be present in less than half of patients with documented PAD and leg symptoms. Therefore, in the geriatric population, one must have a high index of suspicion for PAD as an underlying cause of ambulatory difficulties and leg symptoms. Additionally, in the older patient, coexisting conditions are common; the two most common are osteoarthritis of the hip or knee and neurogenic claudication secondary to spinal stenosis. Osteoarthritis generally localizes to the joint, improves with pain medications, and has a varying course of improvement and worsening throughout the day. Neurogenic claudication is the most difficult to differentiate from vasculogenic disease because spinal stenosis is common in the older population. Neurogenic claudication most commonly presents with pain in the calves, posterior thigh, and buttocks. In contrast to vasculogenic disease, neurogenic claudication has a variable distance to onset, often takes 15 minutes to several hours to resolve, and the claudication distance can be significantly increased with use of an assistive device such as a walker, on which the patient can lean to relieve pressure on the spinal nerves.
Learning Objectives
Obtain a working knowledge of the most common sites of peripheral vascular disease (PVD), the initial diagnostic tests, and options for treatment as well as their outcomes.
Understand the important role aging plays with regard to intervention in the PVD patient where the primary determination to intervene is based on risk-benefit ratio and the time to treatment equipoise.
Describe the key indications with regard to intervention for the most common arterial disease presentations including claudication, critical limb ischemia, symptomatic and asymptomatic carotid
artery stenosis, and abdominal aortic aneurysms (AAAs).
Understand the role of minimally invasive endovascular intervention in comparison to open vascular surgery.
Understand the key physiologic and nonphysiologic factors that affect surgical outcomes in vascular patients especially renal failure and functional status.
Understand the presentation of chronic venous insufficiency including diagnosis and new treatment modalities.
Key Clinical Points
Peripheral arterial disease (PAD) is a common clinical condition in older adults with up to 20% of people older than 70 years having some form of PAD.
PAD can be diagnosed utilizing a simple and accurate test named the ankle-brachial index (ABI).
The decision to intervene in a patient with claudication is a lifestyle choice and should be pursued only after a trial of exercise therapy has been performed.
Intervention for patients with asymptomatic carotid artery stenosis utilizing carotid artery stenting (CAS) is not currently indicated due to the significant risk of perioperative stroke.
Carotid endarterectomy is the generally accepted intervention for older patients with both asymptomatic and symptomatic carotid artery stenosis with CAS acceptable for patients with specific indications.
Intervention of patients with AAA is generally accepted when aneurysmal diameter exceeds 5 to 5.5 cm.
Outcomes for patients with acceptable anatomy for open or endovascular repair of infrarenal AAAs are similar based on current randomized trials, although short-term mortality appears to benefit patients undergoing endovascular repair.
Long-term follow-up of patients undergoing endovascular repair of AAA using computed tomographic scanning is currently
recommended based on changing morphology of the residual aneurysm.
Chronic venous disorders of the lower extremities are present in over 30% of the population and are generally treated first with graduated compression stockings.
Recent data support ablation of the saphenous vein as initial treatment in appropriate patients with venous stasis ulceration.
A focused history of pain with ambulation is usually sufficient to confirm, or raise high suspicion for, the diagnosis of claudication in the majority of patients and to suggest a differential among arterial, spinal, and muscular/joint etiologies. The patient with true vasculogenic claudication will complain of pain with ambulation that starts after a known, relatively fixed distance (two to three blocks is common, as this interferes with activities of daily living). Upon cessation of ambulation and rest, the pain subsides; upon resuming ambulation, it recurs at a similar distance. This cycle in vasculogenic claudication can be repeated indefinitely. Often all three conditions (arterial, spinal, and muscular/joint) coexist in the older patient, and determining which is the primary limiting condition is of utmost importance to achieve an optimal outcome and maintain ambulatory independence.
Although claudication secondary to arterial disease can cause the patient significant distress, the rate of disease progression to rest pain (severe ischemic pain due to insufficient arterial inflow), critical limb ischemia (gangrene, ulceration, or tissue loss), and subsequent amputation is low—on the order of 10% over 10 years or 2.5% annually. Thus, patients presenting to providers worried about amputation should be reassured that amputation is unlikely with appropriate noninterventional or nonoperative management strategies (Figure 78-1).
FIGURE 78-1. The natural history of patients with intermittent claudication including fate of the limb and the relationship to cardiovascular outcome. Note the benign nature of intermittent claudication with regard to the limb yet the high incidence of cardiovascular events. (Reproduced with permission from Weitz JI, Byrne J, Clagett GP, et al. Diagnosis and treatment of chronic arterial insufficiency of the lower extremities: a critical review. Circulation.
1996;94[11]:3026–3049.)
Evaluation
A thorough lower extremity examination includes inspection of the legs to assess for lesions consistent with arterial ischemia. Arterial lesions are primarily located on the distal toes or distal foot and tend to be painful.
Earlier presentations include loss of hair on the toes and distal ankles. Palpation of pulses in the femoral, popliteal, dorsalis pedis, and posterior tibial distributions allows a gross determination of the location of disease. Neuromotor functioning of the foot should be documented in cases of suspected acute or severe ischemia, as viability of the foot is determined not by pulse examination but by retention of muscular and neurologic function.
The diagnosis of vascular disease relies heavily on the vascular laboratory. The initial and most important test is the ankle-brachial index (ABI). The ABI is a simple bedside examination that can be performed with a blood pressure cuff and handheld Doppler. It is defined as the highest systolic blood pressure measured at each ankle divided by the highest systolic blood pressure measured in either arm (brachial artery).
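For concreteness, the calculation and a set of commonly used interpretation cutoffs can be sketched in a few lines. The cutoff bands below are widely cited conventions rather than values taken from this chapter, and the function names are illustrative:

```python
def ankle_brachial_index(ankle_pressures_mmhg, brachial_pressures_mmhg):
    """Per-leg ABI: highest ankle systolic pressure (dorsalis pedis or
    posterior tibial) divided by the highest brachial systolic pressure
    from either arm."""
    return max(ankle_pressures_mmhg) / max(brachial_pressures_mmhg)


def interpret_abi(abi):
    """Commonly cited interpretation bands (illustrative conventions,
    not taken from this chapter)."""
    if abi > 1.30:
        return "noncompressible (calcified) vessels"
    if abi >= 0.91:
        return "normal"
    if abi >= 0.41:
        return "mild-to-moderate PAD"
    return "severe PAD"


# Example: ankle DP 95 and PT 90 mm Hg; brachial arms 140 and 135 mm Hg
abi = ankle_brachial_index([95, 90], [140, 135])  # 95 / 140, ~0.68
```

An ABI is calculated separately for each leg; the lower of the two values reflects the more diseased limb.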
The majority of the vascular system, including retroperitoneal vascular structures, can be imaged with high resolution using noninvasive techniques. Multiple radiologic studies can be used to assess the lower extremity arteries
including duplex ultrasonography (DUS), computed tomographic angiography (CTA), magnetic resonance angiography (MRA), and conventional digital subtraction angiography (DSA). Each has utility in specific situations, and they should not be used interchangeably; DSA in particular is typically reserved for situations when an intervention is planned. Invasive procedures are not commonly used as the initial diagnostic study because of the improved resolution and lack of morbidity of noninvasive evaluations. The study of choice following an abnormal ABI varies based on historical use of specific modalities and is best ordered by the specialist treating the patient to avoid unnecessary tests and cost to the patient.
Management
Lifestyle modification The most important initial management strategies for patients with PAD are smoking cessation and continued ambulation. It has been shown that among smokers who subsequently stop smoking, the risk of amputation becomes exceedingly low, with stabilization of PAD progression and often improvement of walking distance. Patients should also be encouraged to ambulate even if they experience pain.
There is no known negative impact of ambulation on the musculature. In fact, ambulation is the first-line treatment for patients presenting with symptomatic PAD. Supervised exercise therapy has been systematically shown to have a positive and sustainable improvement in ambulatory distances.
Exercise therapy Marked improvements in walking distances have been demonstrated in virtually all studies examining exercise therapy, with supervised exercise therapy providing consistently increasing walking distances compared to baseline. Exercise therapy is classically performed by having patients walk beyond the onset of pain for as long as they can safely tolerate and repeating a series of walking trials, with each 30-minute session occurring three times a week. Patients should be reassured that walking to and through the onset of pain will not have any adverse effect on their legs or muscles but, to the contrary, will promote and increase walking distance by promoting collateral vessel formation. On average, walking distance can be increased on the order of 50% to 200% with a formal exercise program, and in many patients this increase in distance will allow them to accomplish the tasks of daily living that prompted their presentation at the outset. Medicare and many insurance carriers will pay for up to 12 weeks of supervised exercise therapy, although these programs
generally exist only in large cities in conjunction with a robust cardiac rehabilitation center. For those who do not have access to these formal programs, written patient instructions and even smartphone applications exist to help patients with an exercise program.
Pharmacologic Medical management should consist of an antiplatelet agent in conjunction with a high-potency statin. Antiplatelet agents may be prescribed; however, these agents have known and not insignificant side effects and contraindications. They have been variably shown to improve ambulation in patients with claudication, but the maximal gain in walking distance is marginal compared to exercise therapy. Cilostazol, a phosphodiesterase III inhibitor, inhibits smooth muscle cell contraction and platelet aggregation and is FDA approved for the treatment of intermittent claudication; caution should be used when administering it to patients with heart failure. Cilostazol has the most data supporting its use for intermittent arterial claudication, with walking distances improved by up to 50%.
Interventional PAD treatment using a percutaneous endovascular approach has become the preferred initial treatment both for patients with lifestyle-limiting claudication and for those with rest pain or tissue loss. The approach is usually from the femoral arteries for both iliac and femoral/tibial lesions, although new technologies have allowed radial artery and pedal access to minimize complications. Short focal stenoses respond very well to angioplasty and stenting, whereas long-segment stenoses and occlusions are more challenging to treat and have a reduced patency rate, approaching 50% at 6 months, depending on the modality of treatment. Although generally believed to be less durable than open surgical approaches, the endovascular approach offers fewer major complications compared to open surgery, especially in the frail patient, and can be repeated two to three times after the initial revascularization procedure while still maintaining the ability to perform open surgical bypass in the future. This minimally invasive approach has resulted in a significant decrease in the number of open surgical procedures in older patients and has also been associated with a concomitant reduction in the number of amputations nationwide. As a geriatrician, one must focus on the goals of care in the patient with rest pain or tissue loss, where pain relief, infection treatment, and sustaining or improving ambulatory function are often the primary end points.
Open surgical bypass of occluded or stenotic segments still remains the gold standard against which percutaneous interventions are gauged. Bypass
surgeries using prosthetic or autogenous (vein) conduits are the most commonly performed procedures to provide pulsatile flow to the distal leg in the setting of ulcerated lesions or gangrene. The downside to open surgical revascularization is the definite risk of mortality and morbidity that accompanies these procedures. Contemporary quality databases have demonstrated that patients with advanced age greater than 80, renal failure, chronic obstructive pulmonary disease (COPD) requiring oxygen, and congestive heart failure are additive risk factors. Thus in high-risk patients, amputation may be the preferable option for treatment of pain and infection.
Complications associated with percutaneous intervention include vessel thrombosis, embolization, dissection, and rupture. The majority of these are tolerated due to the severity of the disease being addressed. Acute complications necessitating amputation can occur and the patient should be made aware of the potential for limb loss. Outcomes for percutaneous intervention are improving with patency rates of intervened segments approaching 80% at 2 years for iliac artery stents and 70% at 2 years for superficial femoral artery stents. Aortoiliac revascularization utilizing aortobifemoral bypass has a 90% 5-year patency while femoral popliteal and femoral tibial bypasses have 70% to 80% and 60% to 70% 5-year patency rates, respectively. More importantly, limb salvage is greater than 90% in the majority of patients at 2 years and this is confirmed by large-scale data documenting a reduction in amputation rates in population-based studies.
Knowledge of the diagnosis and management of PAD is important for the geriatrician given the high prevalence of the disease within the older population. A trend toward noninvasive diagnosis and minimally invasive approaches is noted. However, the treatment plan should be grounded in knowledge of current treatment paradigms, with attention to provider- and patient-specific factors, to achieve the optimal treatment outcome.
CAROTID ARTERY STENOSIS
Definition
Proper diagnosis, management, and treatment of carotid stenosis are critically important for reducing risk of ischemic stroke in older patients. Carotid stenosis is defined as atherosclerotic narrowing of the extracranial cervical arterial circulation primarily located at the bifurcation and
extending into the proximal internal carotid artery. The stenosis and subsequent plaque rupture, embolism of plaque fragments, or platelet thrombi lead directly to the development of ischemic stroke, primarily in the anterior and middle cerebral circulation.
Epidemiology
Stroke and its resultant disability remain the third leading cause of death in the United States. In general, 80% of strokes are ischemic and 20% hemorrhagic. Among ischemic strokes, 20% to 30% are attributed to atheroembolic disease due to carotid artery stenosis. In patients 80 years and older, the prevalence of asymptomatic carotid artery stenosis is 7.5% for moderate stenosis and 3.1% for severe stenosis.
Presentation
A complete history is required to definitively classify patients as either symptomatic or asymptomatic, as the treatment and aggressiveness of intervention for these two categories are markedly different. Neurologic symptoms including unilateral weakness, paresthesias, receptive or expressive aphasia, dysarthria, and amaurosis fugax (transient unilateral loss of vision), as well as a prior history of a documented transient ischemic attack (TIA) or stroke, are significant findings to elicit. A clear understanding of the event’s time course is key in determining symptomatic status. By definition, patients who experience any of the above symptoms in conjunction with an obstruction of 50% or greater of the corresponding carotid artery are considered symptomatic. Patients with confirmed amaurosis fugax, TIA, or stroke in the past 3 months are at greater risk for stroke. It is also important to note that the following symptoms are not usually associated with carotid stenosis: generalized weakness, vomiting, nausea, vertigo, ataxia, and diplopia.
Evaluation
Recommendations for screening of asymptomatic patients are in constant debate. Multiple societies and current guidelines recommend against the evaluation of the asymptomatic patient. In the case of the older patient, the “do no harm” imperative becomes even more pronounced, as older patients do not benefit from asymptomatic carotid intervention to the extent that younger patients may. The benefit of operative carotid treatment in the
immediate postoperative period and in the long term in older and frail patients, particularly those with renal failure, is less pronounced. It is generally agreed upon that population screening examinations are not cost-effective for asymptomatic patients and are beneficial only in highly selected patient populations, such as patients undergoing coronary artery bypass grafting. Screening is not recommended for patients based solely on presence of an AAA, presence of a carotid bruit, or prior head and neck radiotherapy. In contrast, a patient with clear unilateral signs and symptoms of ischemia should undergo imaging of the cervical carotid circulation due to the change in the risk-benefit equation that favors intervention based on symptomatic status and long-term survival. The need to ensure long-term survival in patients undergoing intervention should be emphasized with the recognition that the benefit of treatment accrues over time and patients on average should be expected to live 4 to 5 years after treatment (Table 78-1).
TABLE 78-1 ■ OUTCOMES AFTER CAROTID ARTERY STENTING IN MEDICARE BENEFICIARIES, 2005 TO 2009
Complete physical examination is important to assess for potentially subtle signs of neurologic ischemia. Focused physical examination includes auscultation of the heart to detect a potential cardioembolic source such as an arrhythmia; palpation of pulses; cranial nerve and neurologic assessment, including examination of the face for unilateral weakness or facial droop; and musculoskeletal examination for overall strength and symmetry. Ophthalmic consultation for detection of Hollenhorst plaques may be indicated, especially in the setting of amaurosis fugax.
Multiple noninvasive imaging studies can assess the carotid arteries including DUS, CTA, or magnetic resonance angiography (MRA). Invasive conventional DSA is reserved for the rare situation when noninvasive imaging is equivocal or if the patient is in need of an intervention to treat the
stenosis utilizing an endovascular approach. Each modality has utility in specific situations with each able to provide the degree of stenosis and characterization of the plaque’s morphology and location. Even though DSA is considered the “gold standard,” it is reserved only for questionable or contradictory findings on noninvasive imaging due to its inherent risk of stroke based on previous randomized trials.
DUS is an accurate, reliable, noninvasive imaging modality and is often the initial study to identify patients with disease. The degree of stenosis and plaque morphology of the carotids can be readily assessed, although the study is operator dependent. CTA and MRA use contrast and can provide accurate imaging of the carotid arteries; both tests are limited in patients with contrast allergy (including anaphylaxis) or preexisting renal disease. However, both studies are very effective at imaging with high resolution the cervical and intracranial carotid and vertebral circulation in addition to the brain itself. Both modalities are more expensive than duplex imaging, and both suffer from artifact such that experienced interpretation is needed to accurately assess carotid plaque morphology and stenosis. DSA, or catheter-based digital angiography, provides excellent images that are easy to interpret regarding degree of stenosis, location, and plaque morphology. Again, it is reserved for patients with conflicting imaging prior to operation, with limitations including risk of stroke, cost, and morbidity.
Management
Pharmacologic Initial management of both symptomatic and asymptomatic patients includes maximizing medical therapy with appropriate antiplatelet agents to prevent platelet aggregation and embolization in conjunction with lipid-lowering agents to stabilize the at-risk plaque in the carotid distribution. For patients who smoke, the risk of stroke nearly doubles with continuance and cessation will markedly reduce stroke risk. For the asymptomatic patient, aggressive medical management is thought to be equivalent to invasive intervention and ongoing trials are underway to address this issue. New data emphasize the additional need for tight blood pressure control (SBP < 120 mm Hg) and optimized diabetic management in stroke prevention. For symptomatic patients, in addition to medical management, the next step is confirming the presence of significant stenosis that would benefit from invasive intervention.
Interventional Once a diagnosis of carotid stenosis is made, several factors must be weighed before embarking on interventional treatment, including whether the stenosis is symptomatic or asymptomatic, the degree of stenosis, the anatomic ease of intervention, the anticipated mortality rate over the coming years based on assessment of medical comorbidities and frailty, and patient preferences regarding stroke risk reduction in the asymptomatic setting.
Generally, medical management is advised for low-grade stenoses (< 50%) in both asymptomatic and symptomatic patients. Intervention is recommended in symptomatic patients with more than 70% stenosis and in asymptomatic patients with more than 80% stenosis, in centers with a track record of excellent outcomes and in patients with an anticipated life expectancy exceeding 5 years. In symptomatic patients with 50% to 69% stenosis, risk-benefit analysis should be given careful consideration prior to intervention, as studies support intervention but only in centers with excellent perioperative outcomes.
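The management thresholds above can be summarized as a small decision sketch. The function name and return phrases are invented for illustration; this merely encodes the chapter's stated cutoffs and is not a clinical decision tool:

```python
def carotid_recommendation(stenosis_pct, symptomatic):
    """Decision sketch encoding the degree-of-stenosis thresholds
    stated in the text (illustrative only)."""
    if stenosis_pct < 50:
        # Low-grade stenosis: medical management whether or not symptomatic
        return "medical management"
    if symptomatic:
        if stenosis_pct > 70:
            return "intervention at an experienced center"
        # 50%-69% symptomatic: studies support intervention only in
        # centers with excellent perioperative outcomes
        return "careful risk-benefit analysis before intervention"
    if stenosis_pct > 80:
        return "consider intervention if life expectancy exceeds 5 years"
    return "aggressive medical management"
```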
Carotid artery stenting (CAS) via percutaneous approach is an acceptable approach to treating carotid stenosis in selected patients based on specific indications. CAS is less invasive allowing for intervention in poor surgical candidates, especially those with cardiac disease. CAS also allows for angiography, angioplasty, and stent placement all with one procedure. It is indicated in patients with prior neck radiation, prior surgical treatment of the carotid artery or neck lesions, contralateral vocal cord injury, significant coronary artery disease, or congestive heart failure. CAS can be anatomically challenging due to aortic arch anatomy, especially in older adults where calcific lesions and poor arch anatomy due to natural aortic remodeling are present. In addition, significant controversy exists on the benefit and outcomes in octogenarians and older patients as multiple studies have documented increased stroke rates in older patients such that trials to date have excluded this population from inclusion due to poor outcomes compared to younger patients. Carotid endarterectomy (CEA) is generally preferred in patients older than 80 years for this reason.
CEA is the gold standard operative procedure for treatment of high-grade carotid stenosis. It has been shown to reduce future stroke risk in multiple well-done randomized trials compared against medical management. For patients with asymptomatic stenosis, CEA is the currently indicated procedure except in select circumstances. A patient with a life expectancy of
greater than 5 years should generally, except when there are neck anatomic concerns, be advised to undergo a CEA for asymptomatic disease with stenosis greater than 70% to 80%. Additionally, it is now becoming accepted that CAS in general carries a greater stroke risk compared to CEA. A newer technique, transcarotid artery revascularization (TCAR), relies on a small neck incision for placement of a stent directly through the common carotid artery. The procedure uses a flow-reversal system to limit embolization, which results in the lowest perioperative stroke risk of any procedure. After intervention, patients are monitored overnight in the hospital and can be discharged the following day barring any complications or concerns. Antiplatelet agents are generally continued during the perioperative period to inhibit platelet adhesion and embolization, as studies have shown a decrease in stroke risk with an antiplatelet regimen.
Complications associated with CAS and CEA include stroke, hematoma, and death. CEA has the highest rate of cranial nerve injury compared to other revascularization techniques. However, complete recovery is typical and permanent injury with disability is rare. As noted above, risk of myocardial infarction is higher with CEA whereas perioperative stroke rates are higher in transfemoral CAS. Accepted 30-day stroke and death risk is less than 3% for asymptomatic patients and less than 6% for symptomatic patients. A major trial of open surgical versus endovascular treatment versus medical management is currently ongoing with results eagerly awaited to clarify the best option for management of carotid occlusive disease.
AORTIC ANEURYSM
Definition
AAA is a degenerative disease of the aorta characterized by inflammation and arterial wall degradation that lead to dilatation and possibly rupture. AAA is defined as an increase in vessel diameter of more than 50%, generally over 3 cm in men. The majority of AAAs remain asymptomatic until rupture or, less commonly, rapid expansion occurs, leading to severe and unrelenting abdominal pain radiating to the back or a pulsatile abdominal mass.
Epidemiology
Aortic aneurysms occur 95% of the time in the infrarenal location and in 1% to 3% of people depending on the population screening criteria. Men
commonly present starting at the age of 65, with women having a noted delay in presentation. This results in a male-to-female ratio of AAA of 2 to 1 in patients less than 80 years, whereas in patients greater than 80 years the incidence is equal. Racial differences are also present, with Caucasians having a greater than threefold incidence of AAA compared to non-Caucasians.
Pathophysiology
AAA is often thought to be caused by atherosclerosis, but no specific cause and effect has been demonstrated; rather, a strong correlation exists. Instead, some inciting event, generally thought to be smoking since greater than 95% of patients have a smoking history, is believed to trigger an unchecked proinflammatory process within the media and adventitia of the aortic wall that results in degradation of the wall’s structure by matrix metalloproteinases (MMPs). The end result is loss of structural integrity of the aortic wall, with dilation and weakening leading to rupture.
Presentation
Multiple studies have now documented that detection of AAA in men older than 65 years reduces overall AAA mortality and is cost-effective. A thorough history is required in determining a patient’s risk for developing an AAA since the patient should ideally have their AAA detected when in the asymptomatic status as the risk of repair is markedly less than in patients with symptomatic unruptured and ruptured AAA. AAA risk factors include tobacco use (current or former), advanced age, coronary artery disease, atherosclerosis, high cholesterol, hypertension, first-degree relative affected, and male gender. Risk factors for expansion include advanced age, severe cardiac disease, prior stroke, and tobacco use. Independent risk factors for AAA rupture include female gender, large initial diameter, low forced expiratory volume in 1 second (FEV1), current smoking, and elevated mean
blood pressure. Patients with family history for inherited connective tissue disorders such as Marfan syndrome or Ehlers-Danlos are also at increased risk for developing AAA, albeit at younger age.
A complete physical examination is critical to assess overall patient function. Abdominal examination can be difficult especially with extreme obesity and may not reveal a pulsatile mass in the mid abdomen. AAAs over
5 cm can be detected on careful examination 76% of the time, whereas smaller AAAs between 3 and 3.9 cm are only detected 29% of the time thus reinforcing the need for screening examinations. In addition to abdominal examination, femoral and pedal pulses and cardiac and pulmonary examination should be performed at a minimum as these systems are commonly involved with atherosclerosis and end-organ dysfunction from long-term smoking.
Evaluation
Current screening recommendations are to obtain an abdominal ultrasound (US) examination on men older than 65 years who have any smoking history or older than 55 years with family history. Women should be screened with US at age 65 if there is a family history or smoking history. Currently Medicare offers US screening as part of their Welcome to Medicare Physical Examination to men who have smoked at least 100 cigarettes over their lifetime or any patient with a family history.
The Society for Vascular Surgery recommends the following surveillance algorithm once an AAA is detected: imaging at 3-year intervals for AAAs between 3.0 and 3.9 cm, at 12-month intervals for aneurysms between 4.0 and 4.9 cm, and every 6 months for patients with aneurysms between 5.0 and 5.4 cm. These recommendations are based on an average growth rate of 0.1 to 0.4 cm per year for all aneurysms, with smaller aneurysms growing at a lesser rate. Any symptoms potentially related to the aneurysm should prompt a repeat study to exclude rapid expansion. Recent studies indicate linear growth of aneurysms less than 5.0 cm in diameter.
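The surveillance schedule amounts to a simple diameter lookup, sketched below using the intervals stated above. The function name and the handling of diameters outside the surveillance bands are illustrative assumptions:

```python
def aaa_surveillance_interval_months(diameter_cm):
    """Surveillance intervals per the SVS algorithm described in the
    text. Returns None below the 3-cm aneurysm threshold and 0 once
    the diameter leaves the surveillance range (illustrative choices)."""
    if diameter_cm < 3.0:
        return None  # not aneurysmal by the chapter's definition
    if diameter_cm < 4.0:
        return 36    # 3.0-3.9 cm: every 3 years
    if diameter_cm < 5.0:
        return 12    # 4.0-4.9 cm: every 12 months
    if diameter_cm < 5.5:
        return 6     # 5.0-5.4 cm: every 6 months
    return 0         # beyond surveillance range: evaluate for repair
```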
As noted above, the initial screening study and the preferred method of surveillance is ultrasound. This is a simple, noninvasive, and painless test with no risk to the patient. However, US is unable to fully image the aneurysm for screening purposes in many obese patients. Alternative modalities include both computed tomography (CT) and magnetic resonance imaging (MRI), which do not require contrast dye to assess the presence of an aneurysm or its size.
Current CT imaging in the form of CT angiography provides excellent detail of the aortic aneurysm in addition to other potential intra-abdominal processes. CT is also more reproducible than US with more consistent measurements between examinations with operator variation removed.
Additionally, if performed as an initial scan, CT also allows examination of the entire aorta from aortic root to bifurcation and provides for 3D reconstruction of vessels which is key for planning both open and endovascular intervention. As CT has become more accessible, it is more commonly used to follow patients after intervention but exposes the patient to contrast dye and radiation, hence the preference to use DUS whenever feasible. MRI can also be used to evaluate AAA; however, due to expense, time, and limited access, MRI should not be first-line imaging for AAA. Its use should be reserved for patients in whom a CT scan is otherwise contraindicated.
Management
Pharmacologic Optimal medical management is directed at controlling comorbidities in patients with AAA. This includes smoking cessation, hypertension control, lipid control, diabetes management, diet and exercise, lifestyle modifications, and regular primary care provider follow-up. It should be noted that exercise and transient increases in blood pressure are not predictors or causes of rupture and therefore patients with an AAA diagnosis should continue to be active while being monitored. Aerobic activity including walking, jogging, biking, and swimming should be encouraged. Standard medical therapy for atherosclerotic occlusive disease should be provided including antiplatelet agents and statin therapy when indicated. Although it would seem β-blockade would reduce vessel stress and thus rupture, this has not been the case and antihypertensive regimens should follow accepted guidelines. A recent trial to determine the efficacy of doxycycline to slow the growth of small aneurysms failed to show any benefit, therefore leaving the patient with AAA with only surgical management options to reduce mortality risk from rupture.
Interventional The single goal of AAA management is to reduce the chance of AAA rupture. As this is purely a risk-benefit analysis, knowledge of the rupture and subsequent death risk at a given AAA size is necessary to make appropriate recommendations for intervention. The size of the AAA is the major determining factor for risk of rupture and thus the major indicator dictating the timing of an intervention. Less commonly, a rapid rate of expansion, generally agreed to be greater than 0.5 cm in 6 months, is an indication for intervention. The risk of rupture correlates directly with size: AAAs less than 4 cm have a 0% to 0.5% yearly rupture risk, those 4.0 to 4.9 cm a 0.5% to 1.5% yearly risk, those 5 to 5.9 cm a 1% to 11% yearly risk, those 6 to 6.9 cm an 11% to 22% yearly risk, and AAAs greater than 7 cm a 30% yearly rupture risk. It is generally accepted that AAAs greater than 5.5 cm in men and 5 cm in women have a high enough yearly risk of rupture that elective surgical intervention is indicated in good-risk patients. This recommendation is based on current data supporting a mortality risk of AAA repair of approximately 1% to 2% for both endovascular and open aneurysm repair.
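These diameter-based risk bands form a lookup table, sketched below. The figures are the ranges quoted above; the function name and the (low, high) tuple representation are illustrative:

```python
def yearly_rupture_risk_pct(diameter_cm):
    """Approximate annual rupture risk ranges by AAA diameter, as
    quoted in the text; returns (low, high) in percent. The text gives
    the largest band as a single 30% yearly figure."""
    if diameter_cm < 4.0:
        return (0.0, 0.5)
    if diameter_cm < 5.0:
        return (0.5, 1.5)
    if diameter_cm < 6.0:
        return (1.0, 11.0)
    if diameter_cm < 7.0:
        return (11.0, 22.0)
    return (30.0, 30.0)
```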
Repair of infrarenal AAA is commonly approached by either endovascular or open surgical repair. Now three decades old, the endovascular approach uses femoral artery access for graft placement under fluoroscopic guidance. Benefits of endovascular repair include less postoperative pain, shorter hospital stay, and lower 30-day mortality.
Additionally, these benefits allow expansion to candidates who would not tolerate an open procedure. In spite of these immediate perioperative benefits, numerous studies have documented similar quality of life and similar long-term morbidity and mortality when comparing endovascular to open repair. The gain of 6 to 8 weeks of pain-free recovery with similar outcomes has led to utilization of the endovascular approach as the first-line treatment in the geriatric population, with over 90% of patients now anatomically suitable for graft placement.
Open repair of an AAA requires either a transperitoneal or retroperitoneal approach, with midline laparotomy common. The open approach is generally used for patients who do not meet anatomic specifications for endovascular aneurysm repair, specifically those with a poor segment of aorta immediately distal to the renal arteries. Thus, the patients undergoing open repair in today’s era are generally at more risk than a standard AAA patient due to need for suprarenal clamping and concomitant risk of renal failure and complex anatomic characteristics that preclude endovascular repair. This generally includes renal artery involvement or complex pelvic anatomy that increases the risk of morbidity and mortality.
With proper diagnosis, surveillance, and medical management, patients can be monitored for years without needing surgical repair for their AAA, and with appropriate intervention, patients can have similar life expectancies compared to matched controls (Table 78-2).
TABLE 78-2 ■ AAA SURVEILLANCE IMAGING RECOMMENDATIONS
CHRONIC VENOUS INSUFFICIENCY
Definition
Chronic venous insufficiency (CVI) is a condition of altered blood flow in the leg veins usually caused by functionally incompetent venous valves leading to increased pressure in the distal venous vasculature. Less likely in the older adult population is the presence of a proximal obstruction leading to venous hypertension. Regardless of the etiology, the increased venous pressure at the level of the lower leg results in the findings classic for CVI.
Epidemiology
The prevalence of venous insufficiency varies considerably between genders, ethnic backgrounds, and age groups. In one general population study, the age-adjusted prevalence of CVI was 9.4% in men and 6.6% in women; prevalence correlated closely with age and sex, rising to 21.2% in men and 12.0% in women older than 50 years. Importantly, the prevalence of CVI increases with age, particularly in studies evaluating chronic leg ulceration.
Pathophysiology
Venous hypertension results from venous reflux due to valvular incompetence, or from obstruction due to thrombosis or narrowing of proximal veins. Superficial venous insufficiency is usually due to weakened valves or widened veins precluding normal valve coaptation. The most common site of valvular incompetence is the saphenofemoral junction, where the greater saphenous vein drains into the common femoral vein. Deep vein thrombosis (DVT) causes venous insufficiency by creating inflammation and adhesion of the venous valves, leading to narrowing and valvular dysfunction.
Whether the cause of venous hypertension is superficial or deep, the constantly elevated hydrostatic pressure produces edema and venous microangiopathy. A common finding in older adults is lipodermatosclerosis and hemosiderin deposition at the malleolar level. Permanent skin hyperpigmentation results from hemosiderin deposition as red blood cells extravasate into the superficial tissues. Lipodermatosclerosis is the skin thickening and woody feeling caused by fibrosis of subcutaneous fat. The ultimate outcome of this unopposed venous hypertension is the venous ulcer, the result of dysfunctional microcirculation and dermal weakness.
Presentation
The condition is characterized by symptoms including fatigue, discomfort, and a sensation of heaviness that worsen during the day. Physical examination findings such as swelling likewise worsen as the day progresses. Late sequelae of chronic venous insufficiency include lipodermatosclerosis, hyperpigmentation, stasis dermatitis, and venous ulceration. The diagnosis of CVI, as with arterial conditions, can be challenging in the older patient because of coexisting conditions. In addition to neurologic conditions such as spinal stenosis, one must also consider vascular-related conditions such as congestive heart failure, chronic edema, and the lymphatic changes associated with joint replacement. The history should focus on the time course of the swelling, which characteristically improves significantly on awakening in the morning after lying flat throughout the night; the classic physical examination findings of sparing of the foot, a gaiter distribution of skin changes, and associated varicosities will invariably be present. The history should also explore conditions that may lead to valvular dysfunction, such as previous deep venous thrombosis or superficial thrombophlebitis, as well as traumatic injuries to the superficial veins of the lower extremities. Particular attention should be paid to sedentary patients who remain in a sitting position for the majority of the day and night, including those confined to a wheelchair and those who sleep in a recliner at night for pulmonary and sleep apnea issues, as this creates unrelieved pressure in the veins that will need to be addressed in conjunction with treatment of the venous insufficiency.
Evaluation
The gold standard for initial evaluation of CVI is duplex ultrasonography. Ultrasound images of the affected vein can be obtained to look specifically for valvular dysfunction and for size changes consistent with narrowing or dilation. Additionally, ultrasound can be used to assess venous reflux; a reflux time of 0.5 seconds or greater within a superficial venous segment is considered significant. CT venography can be obtained if there is suspicion for a proximal obstructing process such as May-Thurner syndrome or, more commonly in older adults, extrinsic compression from a tumor. Invasive venography is rarely used for purely diagnostic purposes; it is usually prompted by findings on noninvasive studies and leads to a therapeutic intervention. Venous plethysmography is still used in specific circumstances by physicians specializing in venous disease.
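The 0.5-second reflux cutoff above reduces to a one-line rule. This sketch encodes only the superficial-segment criterion stated in the text (cutoffs for deep segments differ and are not given here):

```python
def significant_superficial_reflux(reflux_time_s):
    """Duplex-measured reflux time >= 0.5 s in a superficial venous segment
    is considered significant (per the cutoff stated in the text)."""
    return reflux_time_s >= 0.5
```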
Management
Noninvasive The mainstay of treatment is relieving venous hypertension through elevation of the legs when not ambulating and the application of compression therapy. The use of compression therapy has increased in recent times owing to the ready availability of effective and inexpensive compression stockings that do not require a prescription or fitting. For patients with unusual lower leg anatomy, a formal fitting will likely provide a better compression result. In patients with ulcerations, compression is a must for ulcer healing and can take many forms, including standard compression stockings, elastic (Ace) wraps, and Unna boots. With regard to ulcer healing, 90% of ulcers should heal given adequate compression compliance and normal arterial perfusion.
Invasive The treatment of superficial venous insufficiency has recently undergone a sea change with the advent of minimally invasive techniques. These range from in-office sclerotherapy, with or without ultrasound guidance, to treatment of large-vessel superficial incompetence of the saphenous system using advanced sclerosants, chemical sealing, and energy-based procedures aimed at obliterating the offending refluxing segment. Importantly, it has recently been shown that for patients with venous ulcerations and superficial reflux, early endovenous ablation of the superficial incompetence resulted in faster healing of venous leg ulcers and more time free from ulcers than deferred endovenous ablation. Therefore, in the older patient with a venous stasis ulcer, one should implement compression and confirm the presence or absence of superficial venous reflux amenable to intervention, since a truly cost-effective, minimally invasive office procedure is now available to speed ulcer healing and prevent ulcer recurrence.
FURTHER READING
Aronow WS, Ahn C, Gutstein H. Prevalence and incidence of cardiovascular disease in 1160 older men and 2464 older women in a long-term health care facility. J Gerontol A Biol Sci Med Sci. 2002;57(1):M45–M46.
Chaikof EL, Dalman RL, Eskandari MK, et al. The Society for Vascular Surgery practice guidelines on the care of patients with an abdominal aortic aneurysm. J Vasc Surg. 2018;67(1):2-77.e2.
Criqui MH, Aboyans V. Epidemiology of peripheral artery disease. Circ Res. 2015;116(9):1509–1526.
Daly KJ, Torella F, Ashleigh R, McCollum CN. Screening, diagnosis and advances in aortic aneurysm surgery. Gerontology. 2004;50(6):349–359.
De Martino RR, Goodney PP, Nolan BW, et al. Optimal selection of patients for elective abdominal aortic aneurysm repair based on life expectancy. J Vasc Surg. 2013;58(3):589–595.
Egorova NN, Guillerme S, Gelijns A, et al. An analysis of the outcomes of a decade of experience with lower extremity revascularization including limb salvage, lengths of stay, and safety. J Vasc Surg. 2010;51(4):878–885, 885.e1.
Ferguson GG, Eliasziw M, Barr HW, et al. The North American Symptomatic Carotid Endarterectomy Trial: surgical results in 1415 patients. Stroke. 1999;30(9):1751–1758.
Hirsch AT, Criqui MH, Treat-Jacobson D, et al. Peripheral arterial disease detection, awareness, and treatment in primary care. JAMA. 2001;286(11):1317–1324.
Kwolek CJ, Jaff MR, Leal JI, et al. Results of the ROADSTER multicenter trial of transcarotid stenting with dynamic flow reversal. J Vasc Surg. 2015;62(5):1227–1234.
Lane R, Ellis B, Watson L, Leng GC. Exercise for intermittent claudication. Cochrane Database Syst Rev. 2014;7:CD000990.
Lederle FA, Freischlag JA, Kyriakides TC, et al. Outcomes following endovascular vs open repair of abdominal aortic aneurysm: a randomized trial. JAMA. 2009;302(14):1535–1542.
LeFevre ML; U.S. Preventive Services Task Force. Screening for asymptomatic carotid artery stenosis: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2014;161(5):356–362.
McDermott MM. Lower extremity manifestations of peripheral artery disease: the pathophysiologic and functional implications of leg ischemia. Circ Res. 2015;116(9):1540–1550.
Melin AA, Schmid KK, Lynch TG, et al. Preoperative frailty risk analysis index to stratify patients undergoing carotid endarterectomy. J Vasc Surg. 2015;61(3):683–689.
Muluk SC, Muluk VS, Kelley ME, et al. Outcome events in patients with claudication: a 15-year study in 2777 patients. J Vasc Surg. 2001;33(2):251–257; discussion 257–258.
Noorani A, Hippelainen M, Nashef SA. Time until treatment equipoise: a new concept in surgical decision making. JAMA Surg. 2014;149(2):109–111.
Oresanya L, Zhao S, Gan S, et al. Functional outcomes after lower extremity revascularization in nursing home residents: a national cohort study. JAMA Intern Med. 2015;175(6):951–957.
Qureshi AI, Chaudhry SA, Qureshi MH, Suri MF. Rates and predictors of 5-year survival in a national cohort of asymptomatic elderly patients undergoing carotid revascularization. Neurosurgery. 2015;76(1):34–40; discussion 40–41.
Savji N, Rockman CB, Skolnick AH, et al. Association between advanced age and vascular disease in different arterial territories: a population database of over 3.6 million subjects. J Am Coll Cardiol. 2013;61(16):1736–1743.
Voeks JH, Howard G, Roubin GS, et al. Age and outcomes after carotid stenting and endarterectomy: the carotid revascularization endarterectomy versus stenting trial. Stroke. 2011;42(12):3484–3490.
Chapter 79
Hypertension
Mark A. Supiano
INTRODUCTION
High blood pressure has the greatest impact on global attributable mortality of any risk factor. Compared with all other specific risks quantified in the Global Burden of Disease, Injuries, and Risk Factor studies, systolic blood pressure (SBP) of at least 110 to 115 mm Hg was the leading global contributor to preventable death in 2015. Three demographic changes—(1) the substantial increase over the past 25 years in the prevalence of elevated SBP (≥ 110–115 and ≥ 140 mm Hg), (2) the age-associated increase in blood pressure, and (3) the worldwide growth of the aging population—are conspiring to create an enormous, emerging public health impact. In addition to its well-described risk for cardiovascular disease (CVD) and stroke, hypertension is also a significant risk factor for chronic kidney disease, atrial fibrillation, congestive heart failure (CHF) with both reduced and preserved left ventricular ejection fraction, and cognitive impairment—each with a relative risk between 2.0 and 4.0. A reduction of 10 mm Hg systolic and 5 mm Hg diastolic at age 65 is associated with a reduction of up to 25% in myocardial infarction, 40% in stroke, 50% in CHF, and a 10% to 20% overall decrease in mortality. Despite this knowledge, current rates of hypertension control are extremely low, especially among older women. In addition to illustrating the clinical importance of hypertension, these data are a compelling call to improve both our knowledge of the mechanisms that underlie the age-associated increase in blood pressure, to aid in its prevention, and the systems of care necessary to improve blood pressure control among those with hypertension.
EPIDEMIOLOGY
Although high blood pressure should not be construed to be a normal aspect of aging, there is clearly an age-associated increase in blood pressure and in the prevalence of hypertension. Data from the National Health and Nutrition Examination Survey (NHANES) from 2015 to 2018 documented that hypertension is a very prevalent condition among older Americans (Figure 79-1). Based on this survey’s updated definition of hypertension (discussed herein)—a blood pressure of at least 130 mm Hg systolic and/or 80 mm Hg diastolic, or receipt of an antihypertensive medication—the overall prevalence of hypertension among those aged 65 or older exceeds 65%. For women aged 75 and older the prevalence is 85%, and for men it is 84%. Of note, there is an age–gender interaction in hypertension prevalence: at younger ages, prevalence rates are higher among men, while after the age of menopause the prevalence in women surpasses that of men.
FIGURE 79-1. Prevalence of hypertension in US adults ≥ 20 years of age by sex and age (NHANES, 2015–2018). Hypertension is defined in terms of NHANES blood pressure measurements and health interviews. A person was considered to have hypertension if he or she had systolic blood pressure ≥ 130 mm Hg or diastolic blood pressure ≥ 80 mm Hg, if he or she said “yes” to taking antihypertensive medication, or if the person was told on two occasions that he or she had hypertension. NHANES indicates National Health and Nutrition Examination Survey. Source: Unpublished National Heart, Lung, and Blood Institute tabulation using NHANES, 2015 to 2018. (Reproduced with permission from NHANES, 2015 to 2018. National Heart, Lung, and Blood Institute. US Department of Health & Human Services.)
Learning Objectives
Understand what key age-related physiologic changes account for the progressive increase in the prevalence of hypertension with age.
Explain the mechanisms for greater blood pressure variability with age, and understand why a hypertension diagnosis should never be based on a single elevated measurement.
Determine the benefit-based systolic blood pressure (SBP) treatment goal based on age, comorbidities, and cardiovascular and cognitive impairment risk factors.
Understand that arterial stiffness is an independent cardiovascular risk factor.
Select the best thiazide-type diuretic and other medication classes to treat geriatric hypertension.
Key Clinical Points
The prevalence of hypertension increases steadily with age.
Older people develop systolic hypertension due to the age-related increase in arterial stiffness. SBP and pulse pressure, both closely associated with arterial stiffness, confer the greatest significance as cardiovascular and cognitive impairment risk factors.
Age-related changes in systems that regulate blood pressure result in greater blood pressure variability. Therefore, careful attention is needed to accurately measure and diagnose hypertension, as well as to monitor for adverse drug events—especially postural hypotension—throughout treatment.
The diagnosis of hypertension should be based on the average of a minimum of nine blood pressure readings that have been obtained on three separate office visits or derived from 24-hour ambulatory or home blood pressure monitoring results.
Older hypertensive individuals commonly have physiologic characteristics that respond effectively to lifestyle modifications.
The focus of therapy should be on lowering the SBP to the patient’s benefit-based target goal. Applying benefit-based therapy to the majority of adults age 65 or older who are at high cardiovascular disease or cognitive impairment risk favors a SBP goal of less than 130 mm Hg, and for some a goal of 120 mm Hg may be considered.
Thiazide-type diuretic drugs—notably chlorthalidone—are preferred as the initial drug class in most patients. Combination therapy with low doses of one or more agents should be considered if needed to achieve the target SBP level.
Current blood pressure control rates are inadequate. Systems approaches that incorporate geriatric approaches to team care combined with quality improvement strategies need to be adopted to improve treatment outcomes.
Another perspective on epidemiology is to examine the lifetime risk of developing hypertension as has been done in participants in the Framingham Heart Study. This study identified that among men and women participants who had normal blood pressure readings at age 55, nearly 90% developed hypertension over the ensuing 20 to 25 years of follow-up.
CLASSIFICATION
The definitions for normal blood pressure and categories of hypertension were updated in 2017 with the publication of the revised American Heart Association High Blood Pressure Clinical Guideline. Importantly, the prior 140 mm Hg SBP threshold level that defined hypertension was lowered to 130 mm Hg. Contemporary definitions and categories of blood pressure are provided in Table 79-1. Of note, the blood pressure categorizations make no adjustment for age. These definitions incorporate evidence that the cardiovascular risks associated with high blood pressure are continuous beginning at a level of 115/75 mm Hg. The definitions also emphasize that SBP is a more important CVD risk factor than diastolic blood pressure (DBP)—especially for individuals older than 50 years. Finally, since isolated diastolic hypertension is so uncommon among older patients, one may correctly classify an older patient’s hypertension in almost all cases based entirely on the level of their SBP.
TABLE 79-1 ■ BLOOD PRESSURE (BP) CATEGORIES
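Table 79-1 is not reproduced here. As an illustrative sketch, the widely published 2017 guideline categories, consistent with the 130 mm Hg threshold described above, can be encoded as follows (the exact category labels are drawn from the guideline rather than from this text):

```python
def bp_category(sbp, dbp):
    """Classify an average BP reading into the 2017 guideline categories.
    'Or' logic means the higher of the systolic and diastolic stages wins."""
    if sbp >= 140 or dbp >= 90:
        return "stage 2 hypertension"
    if sbp >= 130 or dbp >= 80:
        return "stage 1 hypertension"
    if sbp >= 120:
        return "elevated"
    return "normal"
```

Note the "or" logic: an SBP of 132 with a DBP of 70 is still stage 1 hypertension, which is why an older patient with isolated systolic hypertension is classified almost entirely by the SBP.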
PATHOPHYSIOLOGIC CHARACTERISTICS
No single factor is likely to explain the cause of essential hypertension regardless of its age of onset. However, a number of age-related changes in physiology have been identified and summarized in Table 79-2 that likely contribute to the age-associated increase in blood pressure and to the prevalence of hypertension. Lifestyle factors such as obesity, especially central adiposity, being sedentary, and eating a diet high in sodium content are also contributors commonly identified among older individuals.
TABLE 79-2 ■ AGE-RELATED PHYSIOLOGIC CHANGES THAT CONTRIBUTE TO ELEVATED BLOOD PRESSURE
Homeostatic regulation of blood pressure within its relatively narrow normal range while continuously maintaining adequate cerebral perfusion requires intricate and dynamic coordination of several complex interacting physiologic systems. Under resting conditions, despite age-related physiologic changes that occur in these systems, older individuals experience little difficulty maintaining their blood pressure and cerebral perfusion.
However, when this balance is placed at risk by perturbations imposed by the intravascular volume shifts that occur with upright posture or following a meal, or the stimulus of exposure to one or more vasodilating medications, the older patient is less able to adapt and significant declines in blood pressure and inadequate cerebral perfusion may ensue.
Arterial stiffness, especially in the large central arteries, is the pathophysiologic characteristic that best exemplifies geriatric hypertension. The age-related increase in arterial stiffness is responsible for the type of hypertension most commonly encountered in older patients, namely, systolic hypertension with high pulse pressure (the difference between systolic and diastolic blood pressure). Moreover, several longitudinal studies have demonstrated that an individual’s pulse wave velocity—a marker of arterial stiffness—is a predictor for the subsequent development of hypertension.
Beyond this structural change in the arteries, the regulation of vascular resistance is also affected by age-related changes in the autonomic nervous system and in the vascular endothelium. There is an age-associated decline in the sensitivity of the arterial baroreceptor. This affects the regulation of vascular resistance in two important ways. First, a larger change in blood pressure is required to stimulate the baroreceptor to invoke the appropriate
compensatory response in heart rate. This contributes to the age-related increase in blood pressure variability and likely explains the greater prevalence of postural and postprandial hypotension observed in older individuals. Second, the decrease in baroreceptor sensitivity leads to relatively greater activation of sympathetic nervous system outflow for a given level of blood pressure.
Regulation of vascular resistance by the vascular endothelium is also changed in relation to age. Endothelial dysfunction demonstrated by a decrease in the production of endothelial-derived nitric oxide has been identified to accompany aging as well as hypertension. Impaired nitric oxide–mediated vasodilation is another potential contributor to the age- related increase in peripheral vascular resistance.
Age-related changes in renal function and in particular in renal regulation of sodium balance may also contribute to an increase in blood pressure.
Decreased renal blood flow and glomerular filtration rate impair the aging kidney’s ability to excrete a sodium load. These renal changes in the regulation of sodium balance create a tendency for sodium retention. This likely plays a part in the finding that a high proportion of older hypertensive individuals, perhaps as high as two-thirds, are characterized as having salt sensitivity. Salt sensitivity is operationally defined as an increase in mean arterial blood pressure, commonly 5 mm Hg or more, during a high compared to a low dietary sodium intake.
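The operational definition of salt sensitivity is simple arithmetic on mean arterial pressure (MAP). The sketch below uses the conventional MAP estimate (diastolic pressure plus one-third of the pulse pressure), which is a standard formula rather than one stated in the text:

```python
def mean_arterial_pressure(sbp, dbp):
    """Conventional estimate: MAP = DBP + (SBP - DBP) / 3."""
    return dbp + (sbp - dbp) / 3.0

def salt_sensitive(map_high_na, map_low_na, threshold_mm_hg=5.0):
    """Operational definition from the text: MAP rises by roughly 5 mm Hg
    or more during high compared with low dietary sodium intake."""
    return (map_high_na - map_low_na) >= threshold_mm_hg
```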
Aging also alters the renin-angiotensin-aldosterone system in ways that may contribute both to elevated blood pressure as well as sodium sensitivity. In general, older hypertensive subjects are characterized by having low levels of plasma renin activity. A direct relationship between plasma aldosterone levels within the physiologic range of normal and the future development of hypertension has been shown in normotensive individuals.
Since higher levels of aldosterone have also been linked with central obesity, vascular stiffness, blunting of baroreceptor sensitivity, impaired endothelial function, insulin resistance, and sodium sensitivity, it seems very possible that aldosterone may prove to be a unifying factor that accounts for many of the age-related changes in these physiologic features that also contribute to elevated blood pressure.
DIAGNOSTIC EVALUATION
Measurement Considerations
The first and most critical step in the diagnostic evaluation of hypertension among older individuals is the accurate measurement of blood pressure. In addition to the standard measurement instructions dictating patient preparation and positioning (a minimum 5-minute rest in the seated position with feet on the floor), cuff size, and type of instrument, several factors regarding appropriate blood pressure measurement deserve emphasis. Because blood pressure is more variable in older people, the dictum that “hypertension should never be diagnosed on the basis of a single blood pressure measurement” is especially true. Studies have documented considerable overdiagnosis of hypertension among older people. For example, up to one-third of subjects who were receiving antihypertensive therapy when they enrolled in the Systolic Hypertension in the Elderly Program failed to meet entry blood pressure criteria for the study after their medications had been withdrawn. The diagnosis of hypertension should be based on the average of a minimum of nine blood pressure readings obtained on three separate office visits or derived from 24-hour ambulatory or home blood pressure monitoring results.
Second, with respect to the appropriate measurement device, auscultatory methods are increasingly being supplanted by automated office blood pressure (AOBP) devices. These devices can record unattended BP readings over several minutes from which an average may be calculated. The limitations of auscultatory readings including challenges in accurately hearing the Korotkoff sounds, training requirements, and device calibration are all avoided with AOBP.
Third, while not directly related to the classification of hypertension, another important factor in blood pressure measurement is to always obtain supine and upright standing readings to determine if there is evidence for an orthostatic or postural decrease in blood pressure. The commonly used definition of postural hypotension is a decrease in SBP of 20 mm Hg or more from supine to upright positions within the first several minutes of standing. The presence of postural hypotension is an important risk factor for falls and may be exacerbated by almost all antihypertensive medication classes.
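The postural hypotension criterion in this paragraph reduces to a threshold check. This sketch encodes only the SBP criterion stated here; consensus definitions also include a diastolic drop, which is omitted because it is not given in this passage:

```python
def postural_hypotension(supine_sbp, standing_sbp):
    """SBP falling by 20 mm Hg or more from supine to upright
    within the first several minutes of standing (text's definition)."""
    return (supine_sbp - standing_sbp) >= 20
```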
During enrollment visits for the Systolic Blood Pressure Intervention Trial (SPRINT), nearly 10% of potential participants were found to have a SBP below 110 mm Hg and, as a result, were excluded from the study. Clearly, identifying patients with postural hypotension at the outset and throughout therapy is of critical importance.
Fourth, some individuals may have in-office blood pressure readings that are markedly elevated compared with their in-home, self-taken readings, commonly referred to as white coat hypertension. For these individuals, it is worth considering further evaluation with carefully taken home readings using an appropriately calibrated instrument or obtaining 24-hour ambulatory monitoring.
A final, fifth point concerning blood pressure measurement is to emphasize the primacy of systolic over diastolic blood pressure as the pressure that confers the most significance with respect to cardiovascular risk. Moreover, the pulse pressure, the difference between systolic and diastolic pressure, appears to outweigh either systolic or diastolic blood pressure as a cardiovascular risk factor.
Evaluation
Similar to younger patients, more than 90% of older hypertensive patients have essential hypertension. A diagnostic evaluation for secondary and potentially reversible causes of hypertension should be completed following the standard guidelines that have been developed for younger patients. Several factors deserve special attention in an older patient population. First, since the majority of hypertension in this population is systolic hypertension, older patients who present with primarily diastolic hypertension merit a careful evaluation with a focus on a renovascular cause. This is especially true for those who present with relatively abrupt onset of diastolic hypertension. Second, older patients are likely to be receiving a number of medications, some of which could be contributing to elevated blood pressure. A complete medication review is warranted to search for medications that may be implicated, for example, corticosteroids and nonsteroidal anti-inflammatory drugs including COX-2 inhibitors. Third, the prevalence of sleep apnea among older patients with hypertension is high and may be an important pathophysiologic explanation for their elevated blood pressure. Fourth, although pheochromocytoma is rare, an autopsy study suggests that its incidence increases with advancing age.
Target Organ Damage and Risk Factor Assessment
The evaluation should also include a determination of target organ damage, a cardiovascular risk factor assessment, and identification of comorbid conditions that may impact antihypertensive drug selection. Determining the extent of hypertension-related target organ damage may be complicated by the confounding effects of concurrent age- or disease-related changes. It is important to assess whether the patient has evidence of renal impairment, proteinuria, hypertensive retinopathy, electrocardiographic abnormalities, or left ventricular hypertrophy. An assessment of overall cardiovascular risk— smoking history, alcohol intake, dietary salt and fat intake, and level of physical activity—should also be completed. Finally, the presence of other comorbid conditions (eg, dementia, chronic kidney disease, chronic obstructive pulmonary disease [COPD]) and comorbidities such as frailty may influence antihypertensive medication selection as well as an individual’s blood pressure target goal.
APPROACH TO TREATMENT
Treatment Effectiveness
Results from meta-analyses of numerous placebo-controlled randomized clinical trials that have been conducted in older hypertensive patients have confirmed that significant reductions in cardiovascular and cerebrovascular morbidity and mortality occur with antihypertensive therapy and that the treatments are also safe. Active treatment leads on average to a 12% to 25% decrease in the rate of death, a 35% reduction in stroke, and a 25% reduction in myocardial infarction in addition to significant decreases in the development of mild cognitive impairment, chronic kidney disease, and CHF. For these reasons, there is a clear consensus that treating hypertension in older patients is safe and effective.
Results published from the Systolic Blood Pressure Intervention Trial (SPRINT), which compared usual (< 140 mm Hg) with intensive (< 120 mm Hg) SBP targets (ClinicalTrials.gov, NCT01206062), suggest that a lower SBP target may be particularly effective for some patients with high CVD risk. SPRINT included 2636 community living subjects aged 75 and older (28% of the entire study population) who were assessed for frailty status including usual gait speed, orthostatic hypotension, adverse events including injurious falls and nursing home placement, and, in the SPRINT-MIND subset, comprehensive cognitive evaluations and brain imaging. In the group
of older subjects randomized to the intensive arm there was a 34% reduction in the primary composite CVD outcome and a 33% reduction in all-cause mortality at 3.14 years of follow-up when the trial ended early due to its highly positive outcome (numbers needed to treat 27 and 41, respectively).
These results did not differ for the most frail subgroup or for those with impaired gait speed. While some adverse events were higher in the intensive group, there was no difference observed in serious adverse events, including injurious falls, and also no group difference in self-assessed health-related quality of life regardless of frailty status. The SPRINT Memory and Cognition in Decreased Hypertension (MIND) component was designed to address the hypothesis that the incidence of dementia would be lower with intensive SBP treatment. Although the 17% reduction in adjudicated all-cause probable dementia in the intensive relative to the standard group did not achieve statistical significance, there were significant reductions of the same magnitude in the occurrence of mild cognitive impairment (MCI; 19%; P = 0.01) and in the composite outcome of MCI or dementia (15%; P = 0.02). The companion SPRINT-MRI study provided complementary results demonstrating that intensive therapy was associated with slower progression in the accumulation of white matter hyperintensity volume without significant differences in total brain volume. Longer-term cognitive outcome data are currently being obtained in SPRINT MIND 2020 to further evaluate the dementia outcome as more cases accrue with extended follow-up.
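The numbers needed to treat reported above follow from the trial's absolute risk reductions (NNT = 1/ARR). As illustrative arithmetic only, with made-up event rates rather than SPRINT data:

```python
import math

def number_needed_to_treat(control_event_rate, treated_event_rate):
    """NNT = 1 / absolute risk reduction, rounded up to a whole number of patients."""
    absolute_risk_reduction = control_event_rate - treated_event_rate
    return math.ceil(1.0 / absolute_risk_reduction)

# Hypothetical example: 10% vs 6% event rates give an ARR of 4% and an NNT of 25.
```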
Therapeutic Goals and Monitoring
In accordance with general geriatric principles, it is important to establish individualized patient treatment goals utilizing therapies that are least likely to produce adverse side effects or have a negative impact on quality of life. The inherent complexity and heterogeneity of older adults with multiple comorbidities that may include cognitive impairment and frailty who also have elevated SBP likely explains why it is proving to be so challenging to apply “one size fits all” treat-to-target therapy to this population.
The optimal SBP treatment goal for patients older than 65 years has evolved from the starting point of the seminal findings of the Systolic Hypertension in the Elderly Program (SHEP) study, published in 1991, which for the first time demonstrated that treating what was then known as isolated systolic hypertension was both safe and effective. Over time, subsequent clinical trials and guidelines have informed changes to the recommended SBP treatment goals for older individuals (Figure 79-2). Results from the SPRINT study have been incorporated into the most current (2017) American College of Cardiology/American Heart Association (ACC/AHA) High Blood Pressure Clinical Practice Guideline. Its recommendation for ambulatory, community-living older adults is an SBP goal of less than 130 mm Hg. Its recommendation for those with “a high burden of comorbidity and limited life expectancy” is to utilize clinical judgment in a patient-informed discussion.
FIGURE 79-2. Recommended systolic blood pressure (SBP) treatment goals for older individuals. The red line illustrates the sequential changes over time in the recommended SBP goal made by the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure (JNC) 7 (published in 2003) and JNC 8 (published in 2013) guidelines and the 2017 ACC/AHA guideline. The two major randomized controlled trials prior to the Systolic Blood Pressure Intervention Trial (SPRINT) that informed these changes—the Systolic Hypertension in the Elderly Program (SHEP) in 1991 and the Hypertension in the Very Elderly Trial (HYVET) in 2008—are superimposed with the entry SBP levels for their participants (red bar) and the achieved SBP for the placebo or standard (yellow bar) and active or intensive arms (green bars). It is important to recognize that the benefits observed with intensive therapy in SPRINT are relative to a standard arm whose SBP levels were below the level recommended in prior guidelines. (Reproduced with permission from Supiano MA, Williamson JD. New guidelines and SPRINT results: implications for geriatric hypertension. Circulation. 2019;140[12]:976–978.)
The most common treatment-related adverse side effect, shared by all antihypertensive medications, is the development of postural hypotension. Patients may present with atypical symptoms such as generalized weakness or fatigue rather than noting postural light-headedness or dizziness. For this reason, it is important not to treat blood pressure too aggressively and to determine both supine and upright blood pressure measurements when monitoring all older patients. If a patient’s seated SBP cannot be lowered to below 130 mm Hg without the development of postural hypotension, it is prudent to modify that patient’s blood pressure target to focus instead on the standing blood pressure.
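The supine-versus-standing comparison described above can be sketched as a simple check. This is a minimal, hypothetical helper (the function name is illustrative; the ≥20 mm Hg systolic / ≥10 mm Hg diastolic cutoffs are the widely used consensus definition of orthostatic hypotension, not values stated in this chapter, and should be confirmed against current guidelines):

```python
# Hypothetical helper for flagging postural (orthostatic) hypotension from
# supine and standing blood pressure readings. The thresholds used here
# (drop of >= 20 mm Hg systolic or >= 10 mm Hg diastolic on standing) are
# the common consensus definition, assumed for illustration.

def orthostatic_drop(supine_sbp, supine_dbp, standing_sbp, standing_dbp):
    """Return (sbp_drop, dbp_drop, meets_postural_hypotension_criteria)."""
    sbp_drop = supine_sbp - standing_sbp
    dbp_drop = supine_dbp - standing_dbp
    return sbp_drop, dbp_drop, (sbp_drop >= 20 or dbp_drop >= 10)

# Example: supine 142/78 mm Hg, standing 118/70 mm Hg
print(orthostatic_drop(142, 78, 118, 70))  # (24, 8, True)
```

A reading like the example above (a 24 mm Hg systolic drop) would support relaxing the seated SBP target and titrating therapy against the standing pressure, as the text recommends.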
When patients present with markedly elevated blood pressures in the absence of a true hypertensive emergency (eg, signs of target organ damage, hypertensive encephalopathy, intracranial hemorrhage, acute heart failure with pulmonary edema, dissecting aortic aneurysm, or unstable angina), it is not necessary and may in fact be deleterious to reduce blood pressure to normal values too rapidly. Setting an intermediate treatment goal of 160 mm Hg may be appropriate for these patients. Dosage adjustments or additions of new therapies should be made gradually over time to avoid overtreatment.
Similarly, once patients have reached their therapeutic target and have been maintained on stable therapy, their need for continued treatment should be periodically reassessed. Many patients will tolerate a dosage reduction or medication discontinuation during a carefully monitored withdrawal period, especially if they have been successful in achieving lifestyle modifications.
Lifestyle Modifications
Based on the physiologic profile of the typical older hypertensive patient described in the preceding section—overweight, sedentary, and salt-sensitive—lifestyle modifications directed toward these characteristics would be predicted to be especially efficacious. Additional reasons to focus attention on lifestyle modifications are that they are adjunctive if medications are also needed, lead to improvements in other cardiovascular risk factors, are associated with other salutary outcomes (notably exercise), and carry minimal adverse effects. For patients with stage 1 hypertension (systolic levels between 130 and 139 mm Hg) who do not have diabetes, a 6-month treatment intervention with appropriate lifestyle modifications is the recommended first step.
Randomized controlled trials of multifactorial lifestyle interventions have demonstrated that the blood pressure reduction achieved in the intervention groups is both real and sustained. A meta-analysis of 105 such trials (although few were directed solely at older subjects) demonstrated the overall benefits of weight reduction, aerobic exercise, and decreased intake of sodium and alcohol. Each of these modifications was associated on average with a 5 mm Hg reduction in SBP, comparable to the reduction achieved with a single antihypertensive medication.
The Trial of Nonpharmacologic Intervention in the Elderly (TONE) targeted the effect of dietary sodium restriction and weight loss in older hypertensive patients. In this study, the intervention led to fairly modest declines in dietary sodium intake (average of 40 mmol/day) and body weight (average 4 kg), but there was a 30% decrease in the need to reinitiate antihypertensive therapy among the intervention group.
Pharmacologic Therapies
Overview Currently available evidence supports two general principles with respect to antihypertensive medication selection: one, that the level of blood pressure reduction achieved is more important than which drug is used, and two, that all classes of antihypertensive medications have been demonstrated to be equally efficacious in older patients. Following these principles, the initial antihypertensive drug selection should be based on patient-specific factors. For example, drug selection will depend on whether the patient’s hypertension is simple or complicated by another comorbid condition. The presence of a coexisting condition will often dictate the optimal medication (eg, an ACE inhibitor for patients with type 2 diabetes or CHF). Beyond these factors, medications that are least likely to produce adverse effects should receive first priority. For this reason, as a general statement, centrally acting antihypertensive medications and direct vasodilators are best avoided in older hypertensive patients due respectively to concerns regarding central nervous system sedating effects and their association with marked postural hypotension. In addition, attention should be paid to selecting a once-daily medication to promote adherence and to avoiding any medication interactions with the patient’s other medications.
General treatment recommendations for stage 1 hypertension are summarized in Table 79-3. For patients with stage 1 hypertension in whom a 6-month lifestyle modification intervention strategy has failed to lower blood pressure to the goal level, a thiazide-type diuretic is the most commonly recommended initial medication. Patients who present with stage 2 hypertension will almost certainly require at least two drugs to control their blood pressure—consequently, two antihypertensives should be initiated at the outset. Most often one of these two is a thiazide-type diuretic with the second agent selected either on compelling indications or on the basis of
synergy with the initial agent (eg, combination with an ACE inhibitor). It should be noted that, regardless of the drug choice, the starting dose should in general be reduced in older patients and dosage titration carried out gradually.
TABLE 79-3 ■ GENERAL TREATMENT RECOMMENDATIONS FOR STAGE 1 HYPERTENSION
There are two additional general considerations to be made before a brief review of each of the major antihypertensive classes. (1) β-Receptor antagonists are not recommended as an appropriate choice for the initial antihypertensive drug, especially among older patients. A meta-analysis concluded that unless there is a compelling indication for their use, β-blockers should not be considered a first-line antihypertensive agent in older (60 years and older) patients. (2) Patient-specific factors that directly impact adherence also need to be taken into account. For example, thiazide diuretics are considered first-line agents, but persistence rates with their continued use are lower than with angiotensin receptor blockers. As
with any prescribed medication, cost, simplicity of the regimen, and absence of side effects are important factors impacting rates of adherence.
Diuretics Table 79-4 summarizes the advantages and disadvantages of each of the major drug classes from the geriatric patient perspective. There are several reasons why thiazide-type diuretics are considered the preferred initial antihypertensive agent for most older patients. The primary pathophysiologic explanation is that diuretic therapy has been noted to reduce SBP to a greater extent than diastolic blood pressure, and also achieves greater reductions in systolic pressure relative to other antihypertensive agents. Moreover, the majority of large-scale randomized controlled trials have utilized a thiazide-type diuretic in the treatment arm, and there is an abundance of outcome data demonstrating their therapeutic effectiveness in older hypertensive populations. Additional benefits include low cost, once-daily dosing, and a favorable side effect profile. The most common adverse drug events are metabolic abnormalities, especially hypokalemia, as well as hyperuricemia and impaired glucose tolerance, and urinary frequency or incontinence. However, these side effects are quite uncommon at lower doses. Within the thiazide diuretic class, chlorthalidone is the recommended medication. In addition to evidence of its efficacy from many randomized controlled trials, it has greater potency and a longer half-life than hydrochlorothiazide. When equipotent doses are compared, the incidence of hypokalemia is comparable. Finally, there is good synergy with most of the other commonly used medications, such that adding a second drug to a thiazide-type diuretic, if needed, is a reasonable approach.
TABLE 79-4 ■ ADVANTAGES AND DISADVANTAGES OF ANTIHYPERTENSIVE MEDICATION CLASSES SPECIFIC TO OLDER PATIENTS
Owing to the similarities observed between the physiologic effects of aldosterone and the age-related contributors to elevated blood pressure listed in Table 79-2, aldosterone receptor blockers (spironolactone or eplerenone) are other alternatives to consider.
Angiotensin-converting enzyme inhibitors and angiotensin receptor blockers ACE inhibitor agents and angiotensin receptor blockers are choices for initial therapy or as second agents in combination with a thiazide-type diuretic. Their advantages include the absence of central nervous system or metabolic side effects and overall favorable side effect profile. They are also often used owing to the recommendations for their use in the setting of coexisting type 2 diabetes or heart failure.
Calcium channel antagonists All three chemical classes of calcium channel antagonists have been shown to be effective in treating older hypertensive patients. Their mechanism of action—decreased peripheral vascular resistance—and lack of significant central nervous system or metabolic side
effects provide a good match with the characteristics of the geriatric patient. Age-related changes in the pharmacokinetics of these drugs (decreased clearance and increased plasma levels) mean that lower doses need to be used in older patients. The longer-acting agents in the dihydropyridine class of calcium channel antagonists have been the most widely studied in randomized controlled trials where their effectiveness in treating older patient populations has been demonstrated.
Adrenergic receptor antagonists As discussed previously, β-receptor antagonists are not an appropriate choice for monotherapy for older patients with uncomplicated hypertension. β-Receptor antagonists should be reserved for patients with a compelling indication for their use, namely, as secondary prevention for those patients who have had prior myocardial infarction or have coronary artery disease or in some patients with systolic dysfunction.
Several observations have limited the adoption of α1-receptor antagonists as first-line treatment for older hypertensive patients. In addition to their predilection to produce postural hypotension, subjects who received an α1-receptor antagonist as monotherapy in the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT) were found to have a twofold higher risk of being hospitalized for heart failure relative to the subjects randomized to the diuretic arm of the study. Based on these observations, α1-receptor antagonist therapy should be considered for use as monotherapy only in men in whom it may also be beneficial for symptoms related to benign prostatic hypertrophy, or in combination with another antihypertensive agent.
Barriers to Improving Blood Pressure Control
Since there is no cure for this chronic condition, effective treatment of hypertension requires a lifelong commitment to its management. For this reason, an approach that engages and sustains the patient’s motivation and adherence over time is needed. Several methods may be recommended to promote the patient’s efforts such as providing patient education materials appropriate for the patient’s health literacy level, clear instructions for diet and exercise lifestyle recommendations, and prescribing once-daily medications to facilitate adherence. Some patients may benefit from the feedback and engagement that accompany home or self-taken blood pressure monitoring. Another patient factor is the likelihood that the older
hypertensive patient will have two or more additional chronic conditions. The complexity imposed by concurrently managing these comorbid conditions becomes extremely challenging. This is especially the case when treating a frail older individual when it is not clear how to best prioritize which of several guidelines should take precedence or for that matter if the guideline is still applicable to the patient’s clinical situation.
In addition to these patient-specific factors, a number of barriers have been identified in the health care system that may impede progress in achieving better success in blood pressure control rates in the older population. The underdetection, undertreatment, and inadequate control of hypertension, especially among older patients, are well documented. Some of these system factors are limited access, lack of a team approach to care, constraints imposed by limited patient visit times, access to and costs of treatment, and the reimbursement system. Physician factors—the failure to modify treatment when the patient’s target blood pressure goal has not been achieved—also contribute to this situation. Many physicians overestimate their compliance with guidelines as well as the proportion of their patient populations who have blood pressure levels below their target. However, some quality improvement strategies have been demonstrated to be effective in improving hypertension management. The most effective strategies in this regard have involved a multidisciplinary team approach (assigning a nonphysician member of the team to assume responsibility for management), home blood pressure monitoring, and patient education. Thus, it appears that incorporating a geriatrics approach to hypertension management in the context of a quality improvement program is one effective way to eliminate some of the barriers to improving hypertension control in older patient populations.
The “Trial of Intensive Blood-Pressure Control in Older Patients with Hypertension” was conducted in China shortly after SPRINT (ClinicalTrials.gov NCT03015311). A total of 8511 Chinese patients with hypertension (age range, 60 to 80 years) were randomized to an intensive treatment goal (SBP 110 to <130 mm Hg) or a standard goal (SBP 130 to <150 mm Hg). In addition to the different racial composition relative to SPRINT, participants in the Strategy of Blood Pressure Intervention in the Elderly Hypertensive Patients (STEP) trial were younger (mean age, 66.2 years) and in general had lower CVD risk. There were also differences in the blood pressure measurement protocols, the achieved SBP in the two arms, and the antihypertensive drug regimens used to achieve the treatment goals. Nonetheless, similar to SPRINT, the trial ended early, at a follow-up of 3.3 years, when it was clear that its primary CVD outcome was met in favor of the intensive treatment goal. The STEP trial’s major conclusion that “a reduction in the systolic blood pressure to less than 130 mm Hg resulted in cardiovascular benefits in older patients with hypertension in China” is confirmatory of the SPRINT results.
UNANSWERED QUESTIONS AND FUTURE RESEARCH DIRECTIONS
Future research directions should target our understanding of the mechanisms underlying the age-associated increase in blood pressure, with important implications for prevention and management. For example, it seems clear that understanding the predictors and modifiers of vascular stiffness is of critical importance in preventing the age-associated development of hypertension.
Similarly, although none of the currently available antihypertensive agents specifically targets vascular stiffness, future advances in drug development aimed at preventing hypertension will likely address decreasing vascular stiffness as a mechanism of action. Balancing the competing risks—on one side, the SBP-related risks of stroke, heart failure, other cardiovascular events, and cognitive impairment; on the other, the treatment-related risks, including adverse medication events, falls, and fall-related injuries—in a patient-centered approach remains a challenge that merits further research, particularly in the very frail older adult population. Finally, additional investigation will aim to elucidate why hypertension is a significant risk factor for cognitive impairment and dementia.
FURTHER READING
Beckett NS, Peters R, Fletcher AE, et al. Treatment of hypertension in patients 80 years of age or older. N Engl J Med. 2008;358:1887–1898.
Dickinson HO, Mason JM, Nicolson DJ, et al. Lifestyle interventions to reduce raised blood pressure: a systematic review of randomized controlled trials. J Hypertens. 2006;24:215–233.
Elliott WJ. Drug interactions and drugs that affect blood pressure. J Clin Hypertens. 2006;8:731–737.
Elmer PJ, Obarzanek E, Vollmer WM, et al. Effects of comprehensive lifestyle modification on diet, weight, physical fitness, and blood pressure control: 18-month results of a randomized trial. Ann Intern Med. 2006;144:485–495.
Forette F, Seux ML, Staessen JA, et al. Prevention of dementia in randomised double-blind placebo-controlled Systolic Hypertension in Europe (Syst-Eur) trial. Lancet. 1998;352:1347–1351.
Forouzanfar MH, Liu P, Roth GA, et al. Global burden of hypertension and systolic blood pressure of at least 110 to 115 mm Hg, 1990-2015. JAMA. 2017;317(2):165–182.
Gueyffier F, Bulpitt C, Boissel JP, et al. Antihypertensive drugs in very old people: a subgroup meta-analysis of randomised controlled trials. Lancet. 1999;353:793–796.
Kaess BM, Rong J, Larson MG, et al. Aortic stiffness, blood pressure progression, and incident hypertension. JAMA. 2012;308:875–881.
SPRINT MIND Investigators for the SPRINT Research Group. Effect of intensive vs standard blood pressure control on probable dementia: a randomized clinical trial. JAMA. 2019;321(6):553–561.
SPRINT MIND Investigators for the SPRINT Research Group. Association of intensive vs standard blood pressure control with cerebral white matter lesions. JAMA. 2019;322(6):524–534.
Staessen JA, Gasowski JG, Thijs L, et al. Risks of untreated and treated isolated systolic hypertension in the elderly: meta-analysis of outcome trials. Lancet. 2000;355:865–872.
Steinman MA, Fischer MA, Shlipak MG, et al. Clinician awareness of adherence to hypertension guidelines. Am J Med. 2004;117:747–754.
Supiano MA, Williamson JD. New guidelines and SPRINT results: implications for geriatric hypertension. Circulation. 2019;140(12):976–978.
Vasan RS, Beiser A, Seshadri S, et al. Residual lifetime risk for developing hypertension in middle-aged women and men: the Framingham heart study. JAMA. 2002;287:1003–1010.
Walsh JM, McDonald KM, Shojania KG, et al. Quality improvement strategies for hypertension management: a systematic review. Med Care. 2006;44:646–657.
Whelton PK, Appel LJ, Espeland MA, et al. Sodium reduction and weight loss in the treatment of hypertension in older persons: a randomized controlled trial of nonpharmacologic interventions in the elderly (TONE). JAMA. 1998;279:839–846.
Whelton PK, Carey RM, Aronow WS, et al. 2017 ACC/AHA/AAPA/ABC/ACPM/AGS/APhA/ASH/ASPC/NMA/PCNA guideline for the prevention, detection, evaluation, and management of high blood pressure in adults: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. J Am Coll Cardiol. 2018;71:e127–e248.
Williamson JD, Supiano MA, Applegate WB, et al. Intensive vs standard blood pressure control and cardiovascular disease outcomes in adults aged ≥75 years: a randomized clinical trial. JAMA. 2016;315(24):2673–2682.
Wright JT Jr, Williamson JD, Whelton PK, et al. SPRINT Research Group. A randomized trial of intensive vs standard blood pressure control. N Engl J Med. 2015;373(22):2103–2116.
Zhang W, Zhang S, Deng Y, et al. STEP Study Group. Trial of intensive blood-pressure control in older patients with hypertension. N Engl J Med. 2021;385(14):1268–1279.
Chapter
80
Respiratory System and Selected Pulmonary Disorders
Daniel Guidot, Patty J. Lee, Laurie D. Snyder
INTRODUCTION
Respiratory function is the interplay between the functioning of the lung tissue itself, the airways, the muscles of respiration, and the bones and joints of the thorax. In addition, the respiratory system interacts closely with other organs like the heart. Changes in each and all of these different parts ultimately affect how one breathes. By understanding how each of these parts changes with aging, one can understand how the respiratory system as a whole changes with time.
Embryonic growth and development of the lung is a complex process, and after birth the development of the lung continues. Just as the normal body grows into young adulthood, the lungs and their function progress as well.
Eventually, however, the lungs reach their peak capacity, usually well above the requirements needed for normal respiratory functioning. With normal aging, the pulmonary function declines as lung tissue, chest wall, and muscles change over time. For many patients, this decline never reaches the threshold of meaningful pulmonary disease (Figure 80-1). However, in other individuals, the accumulated insults and injuries lead to an accelerated aging process and pulmonary disease that can lead to breathlessness, decreased exercise capacity, and eventually death. Thus, when considering the aging process of a person’s lung, one must take into account their childhood lung
development, the cumulative burden of lung damage that they have incurred already, and any factors that may accelerate the decline in the lung function.
FIGURE 80-1. Lung function over time, with a threshold below which patients experience symptoms. After reaching peak function, lung function declines with aging. Accumulated injuries lead to an accelerated aging process and pulmonary disease that can cause dyspnea, decreased exercise capacity, and death.
CELLULAR LUNG CHANGES WITH AGING
Learning Objectives
Identify the structural, physiologic, and cellular changes of the lung with age.
Describe how aging processes relate to lung diseases in older patients.
Key Clinical Points
Pulmonary function declines over time in healthy individuals, leading to changes in oxygenation, ventilation, and the ability to fight infections.
Lung disease impacts pulmonary function and can accelerate age-related pulmonary decline.
Changes in the immune system with aging lead to decreased response to antigens and a proinflammatory state, which increases the risk for severe lung infections.
Best practices for pulmonary care of older patients include avoidance of tobacco products and environmental pollution, maintaining ideal body weight, considering appropriate vaccinations, and, if the patient is hypoxic, oxygen supplementation.
The lung undergoes many cellular changes over time. These changes result both from the natural aging process and from the lung’s continual exposure to a range of stress insults, particularly from the external environment. Eventually the cells of the lung lose the ability to divide and differentiate, a process known as senescence. Senescent cells are characterized by arrested cell proliferation coupled with decreased apoptosis and a senescence-associated secretory phenotype (SASP) that is specific to the cell type. While senescent cells can prevent the propagation of damaged cells, they also limit the proliferation needed for tissue repair. Initially, the SASP may be immunosuppressive and promote wound healing, but persistence of the secretory phenotype leads to chronic inflammation and profibrotic processes.
Cellular senescence increases with aging and can be induced by a variety of insults, including oxidative stress, telomere shortening, DNA damage, inflammation, and mitochondrial stress. Environmental insults and disease states can increase oxidative stress to the lungs. Oxidative stress is the process by which highly reactive molecules arise inside cells, leading to damage to the structures within the cell. These molecules arise from the normal processes of oxygen metabolism within the cell. However, toxic environmental exposures and inflammatory states can increase oxidative stress and accelerate this process. With time, progressive oxidative stress can affect how effectively lung cells carry out their various functions and promote accumulation of senescent cells.
Cells actively divide and replenish tissues over time. With each cell replication, a small amount of genetic material is lost and discarded from each end of the DNA molecule. To protect against loss of vital information, chromosomes have segments of redundant DNA at each end, known as telomeres. However, after cells have undergone enough divisions, they can exhaust their telomeres, so that each new cell division yields a cell with slightly less DNA than its parent cell. Genetic predispositions to short telomeres (known as short telomere syndromes) and cellular processes that require increased cell turnover can exacerbate telomere exhaustion, leading to senescence.
In addition, changes in the immune system also occur with aging and impact the lung function and ability to respond to infections. Over time, the body acquires more memory T cells from previous antigen exposure. In addition, the pool of naïve T cells declines over time. This results in an immune system with less capability to respond to new antigens, a state of immune senescence. Thus, when exposed to new infections, the adaptive immune system has less capacity to respond leading to increased innate immune response and increased inflammation and oxidative stress as well as reduced antigen clearance.
Cellular function can change with aging. The cells that line the surface of the alveoli and bronchi in the lungs have cilia, and the sweeping motion of the cilia is an important part of the clearance of debris and pathogens from the lungs. As the lungs age, the cilia have reduced movement, leading to reduced clearance of mucus. Cellular aging also affects the extracellular matrix, the network of molecules surrounding cells that helps them maintain their normal structure and function. As cells age, the extracellular matrix comes to contain less elastin and more collagen, a more rigid and less flexible molecule. This results in lung tissue that is stiffer and less able to return to its natural shape.
The end result of all of these cellular changes is that, with aging, the lung demonstrates reduced capacity to respond to stress, greater propensity toward proinflammatory cytokines, abnormal tissue repair, and increased susceptibility to infection.
MECHANICAL AND FUNCTIONAL LUNG CHANGES WITH AGING
Pulmonary function testing (PFT) is a widely used procedure in pulmonary medicine to measure lung volumes and lung mechanics. While many different studies can be performed, this chapter will focus on the three most commonly performed tests: spirometry, lung volumes, and diffusion capacity.
Spirometry
Spirometry measures the ability to move air in and out of the lungs. The patient maximally inhales and then maximally exhales into a closed-circuit tube, and the volume of air that they are able to exhale is measured. The total amount of air exhaled during this forced exhalation maneuver is called the forced vital capacity (FVC). The FVC can then be further subdivided over specific time points. The most clinically important subdivision is the amount of air exhaled within the first second of exhalation, called the forced expiratory volume in 1 second (FEV1). By taking the ratio of FEV1 to FVC,
the clinician can evaluate for obstruction. The greater the airways obstruction, the longer it takes to exhale and the lower the FEV1 to FVC ratio.
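The ratio calculation described above can be sketched in a few lines. Note that the 0.70 cutoff below is the commonly used fixed-ratio threshold for airflow obstruction (eg, in COPD guidelines); it is an assumption for illustration, not a value given in this chapter, and age-adjusted lower limits of normal are often preferred in older patients, for reasons the aging-related decline in FEV1/FVC discussed later makes clear:

```python
# Illustrative sketch of the FEV1/FVC calculation from spirometry.
# The 0.70 fixed-ratio cutoff for obstruction is assumed here, not stated
# in this chapter; age-adjusted lower limits of normal may be preferred.

def fev1_fvc_ratio(fev1_l, fvc_l):
    """Ratio of forced expiratory volume in 1 s (L) to forced vital capacity (L)."""
    return fev1_l / fvc_l

def is_obstructed(fev1_l, fvc_l, cutoff=0.70):
    """The lower the ratio, the greater the airway obstruction."""
    return fev1_fvc_ratio(fev1_l, fvc_l) < cutoff

# Example: FEV1 = 1.8 L, FVC = 3.0 L -> ratio 0.60, consistent with obstruction
print(round(fev1_fvc_ratio(1.8, 3.0), 2), is_obstructed(1.8, 3.0))  # 0.6 True
```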
Lung Volumes
At the end of a full exhalation, there is no way to know how much air remains in the lungs. Thus, spirometry alone cannot measure total lung volumes.
However, the PFT laboratory can use additional techniques to measure lung volumes, taking advantage of gas dilution or body plethysmography (change of pressure over volume). Using these methods, one can calculate the amount of air in the maximally inflated lung (total lung capacity, or TLC) as well as the amount of air trapped in the lungs after maximal exhalation (residual volume, or RV). These measures of TLC and RV provide additional insights into lung function.
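The volumes defined above are linked by a standard identity of pulmonary physiology: TLC equals vital capacity (the most air that can be exhaled after a full inhalation) plus RV. The identity is textbook physiology rather than a formula stated explicitly in this chapter, and the helper below is purely illustrative:

```python
# Sketch of the standard lung volume relationship TLC = VC + RV.
# Spirometry alone yields VC but not TLC or RV, which is why gas dilution
# or plethysmography is needed; given a measured TLC, RV follows by
# subtraction. Function name and example values are illustrative only.

def residual_volume(tlc_l, vc_l):
    """RV (L) = TLC (L) - VC (L)."""
    return tlc_l - vc_l

# Example: TLC 6.0 L by plethysmography, VC 4.5 L from spirometry
print(residual_volume(6.0, 4.5))  # 1.5
```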
Diffusion Capacity
Finally, PFT can assess the ability of the lungs to extract oxygen from the air. Carbon monoxide (CO) is a toxic molecule with higher affinity for hemoglobin than oxygen. However, very low levels can be safely inhaled, and the body’s ability to absorb carbon monoxide can be used as a surrogate for measuring the gas exchange of oxygen.
Several structural changes in the airways of the lungs occur as part of the normal aging process and can variably affect the spirometry measurements. In the airways, accumulated damage leads to bronchial thickening and hyperplasia, and increases in sympathetic tone over time result in increased smooth muscle tone. In addition, diaphragmatic weakness progresses with aging, which reduces the maximal intake of air. The final result of all of these processes is that, with aging, the FEV1 declines over time. Reduced diaphragmatic strength also reduces the FVC. Loss of elasticity of the lung results in more air residing in the lung at the end of maximal exhalation, producing more air trapping and therefore a reduced FVC. However, the reductions in FVC are less than the reductions in FEV1. Thus, the FEV1 to FVC ratio drops with aging. In other words, aging lungs develop progressive obstruction.
With aging, the lung elasticity changes. The inward pull of the lung tissue is reduced with aging, resulting in an increased tendency for the lungs to rest at a larger volume and an increase in TLC. At the same time, however, the chest wall itself becomes smaller with loss of vertebral height, and calcification within the rib joint articulations increases the force required to expand the chest. These processes reduce TLC with age. The net result is that, with aging, TLC remains relatively constant.
With aging and the progressive accumulation of pulmonary insults, capillary destruction occurs over time. This results in a reduced bed of small blood vessels available to absorb oxygen from the air. Additionally, reduced elasticity of the lung results in greater stretching of the lung at the apices due to the pull of gravity. This results in greater amounts of air being at the top of the lung. Blood flow, which is greatest at the bases of the lung, goes to regions with the least amount of air, resulting in a ventilation–perfusion mismatch. The cumulative result is that, with aging, the diffusion capacity for carbon monoxide (DLCO) declines.
AGING IN KEY DISEASE STATES
Idiopathic Pulmonary Fibrosis
As opposed to diseases that affect the airways (such as COPD and asthma), interstitial lung disease, also known as ILD, is a disease of the lung tissue itself. It is a group of rare disease subtypes with varying causes and clinical manifestations. The most common form of ILD is idiopathic pulmonary fibrosis, or IPF, a disease in which the lung tissue is slowly and progressively consumed by fibrous scar tissue. The exact etiology of IPF is unknown but it is more common in patients older than 60 years. This association with older individuals highlights that the aging lung is more prone to abnormal fibrosis.
Patients with IPF most commonly present with a gradual, insidious onset of progressive dyspnea on exertion, and significant delays in diagnosis are common. IPF should therefore be in the differential for any older patient who presents with progressive dyspnea on exertion. Patients may also have cough, fine crackles at the bilateral lung bases, and, in advanced disease, hypoxia with possible cyanosis or clubbing of the fingernails. Pulmonary function testing usually reveals a reduced TLC and reduced DLCO. Ultimately, the most important diagnostic modality for suspected IPF is chest CT imaging. IPF has a classic presentation on CT, known as a usual interstitial pneumonia pattern, with reticular changes, honeycombing, and traction bronchiectasis (Figure 80-2).
FIGURE 80-2. A. CT scan in idiopathic pulmonary fibrosis. In this image, there are traction bronchiectasis, septal thickening, and architectural distortion, with honeycombing in a subpleural distribution. B. Two images from the same patient, different sections, same scan date. The image on the left is a mid-lung slice with subpleural-predominant honeycombing and reticulation. The image on the right shows the lower lung fields with subpleural involvement but more extensive disease, honeycombing, and traction bronchiectasis. (Reproduced with permission from Dr. Snyder.)
In patients with suspected or confirmed IPF, referral to a pulmonologist is the standard of care. Because IPF is primarily a fibrotic disease, treatment centers on antifibrotics, namely nintedanib and pirfenidone, rather than on immunosuppressive medications such as chronic steroids or other immunosuppressants. The antifibrotic medications slow the progression of restriction on lung function studies but do not reverse scarring that has already occurred. As such, these medications may not lead to immediate improvements in patients' current symptoms or quality of life. Additionally, the antifibrotics have significant gastrointestinal side effects, including nausea and diarrhea, and discontinuation due to intolerance is not uncommon. Thus, the decision to initiate or maintain antifibrotics must be tailored to each individual patient.
Even with therapies, IPF is a steadily progressive disease with high morbidity and mortality, with expected median survival from time of diagnosis on the order of only a few years. In addition, patients with IPF can have flares whereby known or unknown triggers (including infections and clots) can lead to life-threatening bouts of severe inflammation and scarring. As such, for suitable patients diagnosed with IPF, physicians should consider referral for lung transplantation evaluation.
COVID-19
The COVID-19 pandemic highlights how the immune system of the aging lung responds differently to infections. COVID-19 is a viral infection characterized by fever, cough, fatigue, loss of smell, and, in some patients, shortness of breath. In patients with respiratory symptoms, chest imaging shows bilateral pulmonary infiltrates consistent with diffuse alveolar damage. Patients may progress to acute respiratory distress syndrome and require mechanical ventilatory support. This evolution to respiratory failure occurs in a highly proinflammatory state that drives activation of disseminated intravascular coagulation and subsequent multiorgan failure. Older patients with COVID-19 are particularly at risk for this proinflammatory state. The older patient's adaptive immune system is limited in its ability to clear the virus, owing to a relative decrease in T and B cells and a decreased ability to respond to specific antigens. This failure of the adaptive immune response, combined with the proinflammatory response of the innate immune system, sets the stage for an overwhelming immune response in the lungs, contributing to lung failure and systemic immune activation.
BEST PRACTICES IN PULMONARY MEDICINE TO COUNTER THE AGING PROCESS
The healthy aging lung's pulmonary function declines over time. That said, patients and clinicians can take important steps to mitigate both the aging process and its effects on lung health and function. One of the most important interventions is the avoidance of deleterious exposures such as tobacco smoke and air pollution. Exposures like these trigger a proinflammatory response and cause further direct damage to lung tissue, accelerating the aging process and contributing to faster declines in lung function. For patients with respiratory disease at any age, avoidance of further damage to the lungs is key. This also includes preventing pulmonary infections: older patients should receive vaccinations, specifically against influenza, pneumococcal pneumonia (13-valent and 23-valent vaccines), and now COVID-19.
Maintaining an ideal body weight is also key to good lung health. Obesity not only increases the work imposed on the body during normal activities but, particularly in the case of truncal obesity, also impedes respiratory mechanics. Maintaining a healthy weight and minimizing abdominal fat make it easier for the respiratory muscles, including the diaphragm, to function. Ensuring good lung health with aging also requires maintaining the organ systems that work with the lungs as part of respiratory function, including the cardiovascular system, and managing conditions such as heart failure. Additionally, conditions such as osteoporosis can lead to vertebral fractures, with loss of chest wall height and size and a reduced ability to breathe. Treatment of osteoporosis and prevention of fractures can therefore help preserve lung health in the future.
Aging affects not only respiratory function but also the body's ability to recover from stresses and illnesses. Especially for aging patients with chronic cardiac or pulmonary conditions, illnesses and exacerbations can lead to significant declines in function, and with aging, recovery can be slow and incomplete. Cardiac and pulmonary rehabilitation is vital in mitigating and reversing functional declines that result from acute illnesses. Referral to these services should be considered for any aging patient with a chronic heart or lung condition recovering from an acute illness.
FURTHER READING
Bush A. Lung development and aging. Ann Am Thorac Soc. 2016;13:S438–S446.
Campisi J. Cellular senescence and lung function during aging. Yin and yang. Ann Am Thorac Soc. 2016;13 Suppl 5:S402–S406.
Cho SJ, Stout-Delgado HW. Aging and lung disease. Annu Rev Physiol. 2020;82:433–459.
Cunha LL, Perazzio SF, Azzi J, Cravedi P, Riella LV. Remodeling of the immune response with aging: immunosenescence and its potential impact on COVID-19 immune response. Front Immunol. 2020;11:1748.
Faner R, Rojas M, Macnee W, Agusti A. Abnormal lung aging in chronic obstructive pulmonary disease and idiopathic pulmonary fibrosis. Am J Respir Crit Care Med. 2012;186(4):306–313.
Ito K, Barnes PJ. COPD as a disease of accelerated lung aging. Chest. 2009;135(1):173–180.
Liu R-M, Liu G. Cell senescence and fibrotic lung disease. Exp Gerontol. 2020;132:110836.
Lowery EM, Brubaker AL, Kuhlmann E, Kovacs EJ. The aging lung. Clin Interv Aging. 2013;8:1489–1496.
Murray MA, Chotirmall SH. The impact of immunosenescence on pulmonary disease. Mediators Inflamm. 2015:692546.
Pardo A, Selman M. Lung fibroblasts, aging, and idiopathic pulmonary fibrosis. Ann Am Thorac Soc. 2016;13 Suppl 5:S417–S421.
Parikh P, Wicher S, Khandalavala K, Pabelick CM, Britt RD, Prakash YS. Cellular senescence in the lung across the age spectrum. Am J Physiol Lung Cell Mol Physiol. 2019;316:L826–L842.
Pellegrino R, Viegi G, Brusasco V, et al. Interpretative strategies for lung function tests. Eur Respir J. 2005;26:948–968.
Thannickal VJ, Murthy M, Balch WE, et al. Blue Journal Conference: aging and susceptibility to lung disease. Am J Respir Crit Care Med. 2015;191(3):261–269.
Weyand CM, Goronzy JJ. Aging of the immune system: mechanisms and therapeutic targets. Ann Am Thorac Soc. 2016;13 Suppl 5:S422–S428.
Chapter 81
Chronic Obstructive Pulmonary Disease
Carolyn L. Rochester, Kathleen M. Akgün, Jennifer D. Possick, Jennifer M. Kapo, Patty J. Lee
INTRODUCTION
The population of adults older than age 65 is increasing in the United States and elsewhere in the world. Among older persons, respiratory symptoms are prevalent and associated with adverse outcomes. Dyspnea, for example, is reported in one-third of older persons and is associated with physical inactivity, impaired mobility, disability in activities of daily living, and death. Chronic bronchitis occurs in 15% of older persons and is associated with physical inactivity, reduced lung function, chronic obstructive pulmonary disease (COPD) exacerbations, and death. Wheezing occurs in 12% of older persons and is associated with physical inactivity and death.
The occurrence of respiratory symptoms frequently raises concern for COPD. Currently the fourth leading cause of death in the United States, COPD is projected to be the fifth leading cause of disability and the third leading cause of death worldwide by 2030. Older persons are at high risk of developing COPD, given both the age-related decline in physiologic capacity and the cumulative effect of frequent exposures to tobacco smoke, respiratory infections, air pollutants, and occupational exposures across the lifespan.
The prevalence of COPD varies across countries. A 2015 systematic review of population-based studies revealed an estimated global prevalence of 10% to 12% (up to 384 million cases) as of 2010. Epidemiologic surveys of COPD are often based on the symptoms of chronic bronchitis or physician-diagnosed emphysema/COPD, yielding prevalence rates of 11.6% or greater in US adults older than age 65. Alternatively, and more consistent with clinical guidelines, COPD is defined spirometrically as the presence of chronic airflow obstruction. When established by age-appropriate diagnostic thresholds, spirometry-confirmed COPD has a prevalence of 15.4% in US adults age 65 or older. It is estimated that half of adults with COPD will be 75 years or older by 2030. COPD prevalence in the United States is higher among adults with a history of tobacco smoking, those with lower income, and those who use public insurance. Many individuals with COPD are never diagnosed, or are not diagnosed until they are hospitalized with severe respiratory symptoms or respiratory failure.
Learning Objectives
Identify the most common clinical definitions of chronic obstructive pulmonary disease (COPD) and their limitations in older adults.
Establish goals of care in older adults with COPD, most often directed at relieving symptoms, improving exercise tolerance and health status, reducing the risk of disease progression and exacerbations, as well as managing comorbidities.
Define the palliative care needs of older adults with advanced COPD, including symptom management and potential indications for referral to hospice.
Key Clinical Points
Age is a major risk factor for respiratory symptoms and the development of COPD due to age-related changes in lung physiology, and a greater exposure to COPD risk factors, particularly a higher prevalence of "ever smoking" or other relevant exposures in the older adult population.
Although symptoms consistent with COPD (eg, cough, hypersecretion of mucus) are common among seniors, two-thirds of persons who have symptoms of chronic bronchitis and half of those with physician-diagnosed emphysema/COPD have normal spirometry (ie, do not have airflow obstruction, yet may be at risk for poor clinical outcomes).
A reduced FEV1/FVC establishes a diagnosis of airflow obstruction, and at least partial irreversibility is required to demonstrate chronic airflow obstruction, the hallmark of COPD. However, normal age-related airflow limitation is also characterized by a reduced FEV1/FVC, and the threshold used to define COPD in seniors must account for normal aging.
The criteria most often used for establishing and staging airflow obstruction are those of the Global Initiative for Chronic Obstructive Lung Disease (GOLD). The age-related limitations of the GOLD guidelines include two fundamental flaws: (1) GOLD defines a reduced FEV1/FVC by a fixed ratio of 0.70, which does not distinguish between age-related airflow limitation and COPD (disease)-related airflow obstruction; and (2) GOLD expresses the FEV1 as a percentage of the predicted value, thus failing to account for age-related variability in spirometric performance. This creates a potential risk of overdiagnosing COPD (ie, labeling age-related airflow limitation as disease) in older adults. Notably, a reduced FEV1/FVC ratio that is nonetheless normal for age is not associated with respiratory symptoms, exercise intolerance, impaired mobility, COPD hospitalization, or mortality.
The Global Lung Function Initiative (GLI) has recommended that the lower limit of normal instead be defined as the fifth percentile of the reference distribution (a Z-score of −1.64), using reference equations that include Americans and many other ethnic groups worldwide and that extend to age 95 years. Prior work has shown that airflow obstruction defined by an FEV1/FVC Z-score less than −1.64 is associated with respiratory symptoms, impaired mobility, frailty status, COPD hospitalization, and mortality.
Reductions in FVC related to kyphosis/scoliosis, obesity, respiratory muscle weakness, and other factors may cause “pseudo-normalization” of the FEV1/FVC ratio, and in turn mask the presence of COPD.
Treatment of COPD in seniors is complicated by difficulty with drug administration (eg, due to cognitive impairment, physical disability) and may require additional teaching/coaching and assessment to achieve optimal outcomes. Clinicians should be aware of potential adverse effects of COPD pharmacotherapies, including arrhythmia, constipation, urinary retention, pneumonia, glaucoma, osteoporosis, compression fracture, thrush, and skin bruising.
COPD exacerbations are a major cause of worsening symptoms, disability, hospitalization, and death. Emphasis on prevention and early treatment of exacerbations is a key aspect of COPD care. Treatment of acute exacerbations includes:
a. Intensified bronchodilator therapy.
b. Corticosteroids should be considered for severe exacerbations, but side effects are common in seniors; for example, delirium is more common, particularly at doses exceeding 60 mg daily.
c. Antibiotics should be considered for exacerbations associated with increased sputum purulence and/or volume, especially moderate or severe exacerbations. Older adult residents of skilled nursing facilities, those who have spent time in acute-care or subacute rehabilitation facilities within 90 days of the exacerbation, and persons with severe airflow obstruction (FEV1 < 30% predicted or FEV1 Z-score < −2.55) are more likely to be colonized with resistant organisms (methicillin-resistant Staphylococcus aureus and multidrug-resistant gram-negative organisms [eg, Pseudomonas aeruginosa]). These risk factors, together with local antibiotic resistance patterns, should guide the choice of empiric therapy. The duration of antibiotic therapy, in the absence of complicating factors such as pneumonia or bronchiectasis, is typically 3 to 7 days.
d. Discharge planning should consider referral for pulmonary rehabilitation, the need for home oxygen and/or noninvasive ventilation, and a recalibration of maintenance therapy; because new treatment regimens increase the likelihood of medication nonadherence or errors, formal medication reconciliation and pharmacist-based review should be implemented.
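The contrast between the fixed-ratio (GOLD) and age-adjusted Z-score (GLI) criteria described in the Key Clinical Points above can be sketched in code. This is a minimal illustration, not a clinical tool: the Z-score is assumed to come from published GLI reference equations (eg, via a GLI calculator) and is treated here as an input, and the example patient values are hypothetical:

```python
# Sketch of the two airflow-obstruction criteria discussed above.
# The FEV1/FVC Z-score must be computed from published reference
# equations (eg, GLI-2012); it is treated here as a given input.

GOLD_FIXED_RATIO = 0.70
GLI_LLN_ZSCORE = -1.64  # fifth percentile of the reference distribution

def obstruction_by_fixed_ratio(fev1_fvc: float) -> bool:
    """GOLD criterion: FEV1/FVC below the fixed 0.70 cutoff."""
    return fev1_fvc < GOLD_FIXED_RATIO

def obstruction_by_zscore(fev1_fvc_zscore: float) -> bool:
    """GLI criterion: FEV1/FVC Z-score below the age-adjusted LLN."""
    return fev1_fvc_zscore < GLI_LLN_ZSCORE

# Hypothetical 80-year-old: a ratio of 0.67 is below the fixed cutoff,
# yet a Z-score of -1.2 is still within the normal range for age.
ratio, zscore = 0.67, -1.2
print(obstruction_by_fixed_ratio(ratio))  # True  -> possible overdiagnosis
print(obstruction_by_zscore(zscore))      # False -> normal for age
```

The discordant result in the example is exactly the overdiagnosis risk noted above: the fixed ratio flags obstruction, while the age-adjusted Z-score does not.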
COPD DEFINITION
COPD is defined by the Global Initiative for Chronic Obstructive Lung Disease as “a common, preventable and treatable disease that is characterized by persistent respiratory symptoms and airflow limitation that is due to airway and/or alveolar abnormalities usually caused by significant exposure to noxious particles or gases and influenced by host factors including abnormal lung development.”
RISK FACTORS FOR COPD
Advancing age is accompanied by a high frequency of COPD risk factors. The most important of these is tobacco smoke, accounting for the majority of COPD cases in the United States and other high-income countries. In recent cohorts of older Americans, the prevalence of ever-smokers was 56%, including 9% as current smokers, and, among never-smokers, 32% had exposure to second-hand smoke. The global prevalence of daily smoking was estimated at 15% in 2015. Although tobacco use has declined in many countries since the 1990s, active smoking contributed to an estimated 1.23 million deaths in 2017. Passive smoke exposure, particularly during childhood, also contributes to significant risk for COPD.
An estimated 20% to 40% of people with COPD worldwide are never-smokers. Respiratory infections (bacterial and/or viral) further contribute to the onset and progression of COPD (see the section on Acute Exacerbations below). About one-quarter of older Americans report a prior pneumonia, and those older than age 75 have a 10-fold increased rate of influenza hospitalization. Outdoor air pollution (especially fine particulate matter) is another major COPD risk factor worldwide. In 2009, 34% of older Americans lived in a major city, a surrogate for exposure to outdoor air pollution; wildfires related to climate change now pose an increasing threat. Other COPD risk factors include HIV infection, heroin use, and occupational exposures to dusts, vapors, fumes, and gases (eg, among freight, stock, and material handlers; construction, metal, and wood workers; miners; administrative support and information industry workers; healthcare workers; and others). Importantly, household air pollution related to the use of biomass fuel for indoor cooking or heating is another key risk for COPD worldwide: the prevalence of these exposures is 12% and 18%, respectively, among older nonsmoking Americans, and exposure is significantly higher (contributing to an estimated 2 million deaths in 2017) in low- and middle-income countries.
Notably, not all COPD results from environmental exposures during adulthood causing accelerated lung function decline. It is now clear that various trajectories of lung function exist; a substantial proportion of individuals develop COPD after having failed to reach optimal lung function early in life. This relates to several factors, including in utero exposures, bronchopulmonary dysplasia, prematurity and low birth weight, impaired nutrition, childhood asthma, tobacco smoke exposure, frequent childhood respiratory infections, and genetic predisposition (most notably α1-antitrypsin deficiency). Environmental exposures later in life can further compound the risk of developing COPD in these persons.
PATHOPHYSIOLOGY OF THE AGING LUNG
Healthy aging is associated with structural and functional changes in the lungs and chest wall. As shown in Table 81-1, the aging lung is characterized by a reduced physiologic capacity, with multiple respiratory impairments that increase the vulnerability for developing COPD, including respiratory failure. Aging-related changes in the bony thorax including kyphosis, increased convexity of the sternum, stiffening of the ribcage, and decrease in respiratory muscle mass lead to a progressive increase in the rigidity of the chest wall (decreased chest wall compliance) and reduced curvature of the diaphragm, with resultant alterations in respiratory mechanics. Concurrently, in the lungs, degeneration of elastic fibers and alterations in collagen around the alveolar ducts result in homogeneous airspace enlargement, with a reduced total alveolar surface area but in the absence of alveolar wall destruction (“senile emphysema”). Surface tension is reduced due to
increased alveolar size. These changes lead to decreased elastic recoil of the lung, further associated with a reduced diameter of the small airways.
Collectively, these respiratory impairments lead to (1) airflow limitation: defined by a decreased spirometric ratio of forced expiratory volume in 1 second (FEV1) to forced vital capacity (FVC); (2) air trapping and
hyperinflation: defined by an increased residual volume (RV) and functional residual capacity (FRC), respectively; (3) decreased total lung capacity (TLC); (4) reduced maximum breathing capacity (MBC): defined by a decline in the maximal attainable minute ventilation, strongly correlating with a decrease in FEV1 of ~30 mL/y; and (5) ventilation-perfusion mismatch:
airway closing volume is increased (it approaches the FRC); this leads to premature closure of the small airways and widening of the alveolar-arterial oxygen gradient. Other age-related respiratory impairments that increase the vulnerability for developing COPD, including respiratory failure, relate to reductions in pulmonary capillary density and increased stiffness of pulmonary vasculature, alterations in the airway epithelium including reduced mucociliary clearance efficiency, decreased respiratory muscle strength, and reduced cerebrovascular responsiveness to carbon dioxide (CO2) (tightly linked to the CO2 ventilatory response). These age-related
changes in the respiratory system also result in increased expiratory flow limitation, an altered respiratory pattern (higher respiratory rate and lower tidal volume), increased dead space, and increased work of breathing during exercise. Additional cellular and molecular changes associated with aging include decreased lung progenitor cell populations, alterations in innate and adaptive immunity (immunosenescence) and other cellular senescence, decreased proteostasis, altered mitochondrial function, reduced telomere length, and increased proinflammatory cytokines and oxidative stress. Collectively, these changes render the aging lung more prone to injury, such as that conferred by exposure to tobacco smoke, with a decreased capacity for tissue repair. These issues are considered further below.
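The widening alveolar-arterial oxygen gradient mentioned above can be estimated at the bedside with the standard alveolar gas equation. The following is a minimal sketch, assuming sea-level constants and a respiratory quotient of 0.8; the patient values in the example are hypothetical:

```python
# Minimal sketch of the alveolar gas equation and A-a gradient.
# Assumed constants: barometric pressure 760 mm Hg (sea level),
# water vapor pressure 47 mm Hg, respiratory quotient R = 0.8.

def alveolar_po2(fio2: float, paco2: float,
                 patm: float = 760.0, ph2o: float = 47.0,
                 r: float = 0.8) -> float:
    """Alveolar PO2 (mm Hg): PAO2 = FiO2 * (Patm - PH2O) - PaCO2 / R."""
    return fio2 * (patm - ph2o) - paco2 / r

def aa_gradient(fio2: float, paco2: float, pao2_arterial: float) -> float:
    """Alveolar-arterial oxygen gradient (mm Hg)."""
    return alveolar_po2(fio2, paco2) - pao2_arterial

# Hypothetical older patient on room air (FiO2 0.21),
# PaCO2 40 mm Hg, measured arterial PO2 70 mm Hg:
print(round(aa_gradient(0.21, 40.0, 70.0), 1))  # 29.7
```

Because the expected A-a gradient rises with age, an estimate like this must always be interpreted against an age-adjusted normal rather than a single fixed cutoff.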
TABLE 81-1 ■ RESPIRATORY IMPAIRMENTS OF THE AGING LUNG AND COPD
HISTOPATHOLOGY OF COPD
COPD is a heterogeneous disease characterized by several distinct histopathologic features, involving structural changes in the conducting airways and/or alveoli. The relative contribution of these features varies among individuals, and the differing features often coexist. The most
important structural changes in COPD include (1) mucus hypersecretion and large-airway inflammation, (2) inflammatory remodeling and fibrosis of small distal bronchioles (2 mm in diameter or less), and (3) emphysema.
Chronic bronchitis is defined as chronic cough and sputum production for at least 3 months per year for 2 consecutive years. Mucus hypersecretion is the hallmark of chronic bronchitis. These clinical features result from goblet cell hyperplasia in large and small airways, mucociliary dysfunction, and persistent neutrophil-predominant large-airway inflammation. Mucus hypersecretion worsens airflow obstruction and predisposes to bacterial infection. Bronchiectasis, which is present in up to 30% of patients with COPD, is an additional mechanism that may underlie chronic cough and sputum production. A second key histopathologic feature of COPD is remodeling of small bronchioles less than 2 mm in diameter. This is characterized by airway wall inflammation (CD8+ T cell-, CD20+ B cell-, and macrophage-predominant) and intraluminal mucus-containing inflammatory exudates, as well as thickened airway walls and peribronchiolar fibrosis; collectively, these processes narrow the airway lumen. Emphysema, or loss of intact alveoli with enlargement of alveolar spaces, is the third key histologic feature of COPD. Emphysema may be centrilobular (the typical pattern related to tobacco smoke exposure), panacinar (typical in α1-antitrypsin deficiency), or paraseptal in distribution. The distribution and severity of emphysema vary and may worsen over time. A decrease in the number and total cross-sectional area of terminal bronchioles appears to precede emphysematous destruction. Some individuals develop "bullae": large air-filled sacs greater than 1 to 2 cm in diameter.
PATHOGENESIS OF COPD
The pathogenesis of COPD is incompletely understood, especially in the context of the aging lung, varying environmental exposures, and different trajectories of acquiring the disease. The long-standing paradigms of COPD pathogenesis generally fall into three categories: (1) antioxidant/oxidant imbalance, (2) antiprotease/protease imbalance, and (3) inflammation. Early focus on these issues resulted from compelling animal models in which candidate gene identification included α1-antitrypsin (α1-AT), macrophage
elastase, surfactant D, microsomal epoxide hydrolase, Nrf2, matrix metalloproteases, as well as cathepsins.
Most of these studies addressed the early, initiating phases of COPD development. However, COPD develops over decades despite smoking cessation, thought to be due to “progression” and “consolidation” phases of disease in genetically susceptible hosts. Table 81-2 summarizes current understanding of COPD pathogenesis, in which, in addition to the three key mechanisms noted above, accelerated aging, aberrant immune responses, dysregulated tissue repair, and chronic viral infection have emerging roles. Genome-wide association studies have identified a wide array of genetic associations for COPD susceptibility and phenotypes beyond α1-AT
deficiency (see Further Reading).
TABLE 81-2 ■ SUMMARY OF COPD PATHOGENESIS
Aspects of Biologic Aging Also Found in COPD
Telomere length, which is a biomarker of cell aging, is shorter in lung cells from COPD patients compared with age-matched individuals without COPD. Genetic mouse models of accelerated aging also show chronic lung disease. Mutant Klotho mice, which have a shortened lifespan and manifest signs of
accelerated aging, exhibit emphysema despite normal lung development. Expression of the NAD-dependent deacetylase, SIRT1, a member of the sirtuin family that is implicated in caloric restriction-mediated lifespan extension and proinflammatory signaling, is decreased in emphysematous lungs. Cellular senescence and apoptosis are two age-related processes that also occur in COPD. Cellular senescence is increased by oxidative stress (such as that induced by tobacco smoke exposure and inflammation).
Senescent cells can further activate proinflammatory processes via the transcription factor NF-κB. Recent research has demonstrated that microRNAs (small noncoding RNA sequences) may play a role in cell senescence and impaired phagocytosis of apoptotic cells. Multiple studies have documented increased apoptosis in the lungs of patients with COPD, and the resultant imbalance between apoptosis and tissue repair may underlie the development of emphysema.
Recent reports have identified innate immune molecules as key regulators of age-related emphysema. There is functional overlap between immune responses to environmental insults and to invasive pathogens. The innate response constitutes the first phase of the immune response and is triggered by pattern-recognition receptors. Three main classes of recognition receptors activate the innate system: (1) the membrane-associated toll-like receptors (TLRs) recognize microbial components and ligands from damaged cells, (2) the NOD-like receptors (NLRs) are activated when stimuli enter the cytoplasm, and (3) the RIG-I-like receptors (RLRs) mediate cytoplasmic recognition of viral nucleic acids. Because immunosenescence contributes to susceptibility to infection and decreased vaccine responses in older adults, significant emphasis has been placed on understanding how aging affects the innate immune system. The roles of innate immune receptors in age-related lung disease have been reviewed elsewhere (see Further Reading).
Age-related loss of endogenous, protective immune molecules such as macrophage migration inhibitory factor (MIF) has recently been identified as a mechanism of emphysema in smokers and corroborated in mouse models.
MIF exerts pleiotropic protective effects by modulating cellular senescence molecules, antioxidants (such as NRF2), apoptosis, and key growth factors (such as VEGF). The loss of trophic factors (such as VEGF and Wnt signaling) found in lung samples from COPD patients is thought to lead to alveolar destruction and, potentially, the disappearance of the terminal airways in COPD. In addition to enhanced tissue destruction, multiple studies have highlighted the role of lung cell apoptosis. Enhanced cell death and inadequate regenerative responses, including stem cell exhaustion, defective DNA repair, and decreased autophagy (the removal of degraded proteins, damaged organelles, or foreign pathogens), are pivotal to the progressive loss of gas exchange surface area in emphysema.
Most studies of COPD pathogenesis have focused primarily on emphysema rather than airway-predominant COPD and chronic bronchitis, but at least some of the same processes are likely involved (Figure 81-1). The paucity of animal models for chronic bronchitis has been a barrier.
Mice, for instance, do not exhibit the significant airway mucus secretion or airway remodeling typical of chronic bronchitis despite chronic cigarette smoke exposure. However, studies of mice with lung-targeted overexpression of IL-13, VEGF, and IL-18, in combination with clinical evidence of increased levels of these cytokines in smokers' lungs, have provided new avenues for further study.
FIGURE 81-1. Cellular processes determining chronic bronchitis and emphysema.
Physiologic Impairments Resulting from Structural Changes in COPD
Several physiologic impairments result from the structural changes in the airways, alveoli, and chest wall in COPD. The respiratory impairments of COPD overlap with those of the aging lung, although in the presence of COPD (disease), they occur to a more severe extent (see Table 81-1). Each
of the structural changes in the airways (airway wall inflammation, mucus, bronchoconstriction and/or small airway remodeling) and/or alveoli (emphysema) results in obstruction to the flow of air. Other key physiologic impairments include air trapping and hyperinflation and gas exchange abnormalities.
Chronic airflow obstruction is most often established by serial spirometry (confirming limited reversibility). The major site of airflow obstruction in COPD is the aforementioned inflamed and remodeled small airways less than 2 mm in diameter. In addition, emphysematous destruction of alveoli increases airflow obstruction, through loss of alveolar attachments (and in turn loss of outward tethering on small bronchioles) and a decrease in the elastic recoil of the lung. Bronchoconstriction is another contributor to airflow limitation, particularly during acute exacerbations of the disease.
Chronic airflow obstruction makes expiratory alveolar emptying increasingly difficult, resulting in air trapping and lung hyperinflation (measured clinically by helium dilution or whole-body plethysmography). Hyperinflation may be present at rest, particularly among those with severe airflow obstruction. Most individuals, even those with mild airflow obstruction, develop dynamic hyperinflation with exertion, when faster respiratory rates lead to insufficient time for exhalation to baseline end- expiratory lung volume. In combination, these impairments adversely affect
(1) breathing patterns: flow and volume limited, with an increase in the intrinsic positive end-expiratory pressure (PEEPi); (2) respiratory muscle strength: the curvature of the diaphragm is reduced, altering length-tension relationships and thus decreasing the force generating capacity of the diaphragm; and (3) maximum breathing capacity (MBC): the maximal attainable minute ventilation is reduced, strongly correlating with a decrease in FEV1. The net effect is exercise intolerance, symptom-limiting dyspnea,
and an increased risk of respiratory failure. Of note, anything that increases respiratory rate or work of breathing, including anxiety, hypoxemia, pain, or concurrent congestive heart failure (CHF), can lead to dynamic hyperinflation and significant worsening of respiratory symptoms. Additional common symptoms in COPD include cough, sputum production, sense of chest tightness, sleep disturbances, and fatigue. Sleep disturbances may include nocturnal hypoxemia, alveolar hypoventilation with increase in PaCO2, and/or concurrent obstructive sleep apnea (OSA).
Gas exchange abnormalities in COPD relate to ventilation-perfusion mismatch, owing to lung regions with a low ventilation-to-perfusion ratio or an increase in dead space (ventilated but underperfused)—these abnormalities relate to chronic airflow obstruction (leading to low V/Q areas) and alveolar-capillary destruction and air trapping (leading to high V/Q areas). Clinically, gas exchange abnormalities may be identified initially by a reduced diffusing capacity of the lung for carbon monoxide (DLCO) (adjusted for hemoglobin), suggesting emphysema when coexisting with airflow obstruction and hyperinflation. The reduced DLCO reflects loss of intact alveolar capillary surface area for gas exchange. In advanced COPD, based on the relative contributions of impairments in respiratory mechanics, respiratory muscle strength, and central chemosensitivity (see Table 81-1), gas exchange abnormalities may progress to hypoxemia and hypercapnia, yielding a reduction in arterial oxygen content and an acute/chronic respiratory acidosis—these abnormalities define the presence of respiratory failure.
In addition, destruction of the alveolar-capillary interface with loss of pulmonary capillaries and hypoxic pulmonary vasoconstriction (eg, resulting from resting, exertion, and/or sleep-related hypoxemia) and other mechanisms can lead to pulmonary hypertension and right ventricular failure (cor pulmonale). Moreover, ventricular interdependence (right ventricular pressure and volume overload) and marked intrathoracic pressure swings (increased work of breathing, hyperinflation, air trapping, and PEEPi) can decrease left ventricular preload and increase left ventricular diastolic dysfunction, ultimately reducing the cardiac output. The latter amplifies the ventilation-perfusion mismatch of COPD and reduces systemic oxygen delivery, and volume overload may develop, thus worsening exercise intolerance and increasing the risk of respiratory failure.
COMORBIDITIES AND COPD
COPD is also associated with several extrapulmonary impairments and comorbidities. Those involving the cardiovascular and musculoskeletal systems, in particular, overlap with those of normal aging (Table 81-3). Cardiovascular disease is one of the most common comorbidities seen. Hypertension, coronary artery disease, peripheral vascular disease, atrial fibrillation, and CHF (systolic and diastolic) occur frequently and in varying
combinations. An estimated 40% of persons with COPD who require mechanical ventilation for hypercarbic respiratory failure due to COPD exacerbation have some (systolic and/or diastolic) left ventricular dysfunction. Skeletal muscle dysfunction, present in approximately 30% of people with COPD, is characterized by changes in muscle fiber structure and function (including reduction in type I endurance fibers with reduced oxidative enzyme capacity) that lead to reduced muscle mass, strength and endurance. Sarcopenia is present in an estimated 22% (up to 63% among those in nursing homes) and is associated with frailty. Older adults with COPD have an estimated twofold increased risk of frailty. Other common comorbidities include anxiety, depression, osteopenia and osteoporosis, arthritis, OSA, metabolic syndrome, anemia, respiratory infection, and lung cancer. Multiple pathogenic mechanisms underlie these comorbidities, including systemic inflammation, immune response, anorexia, deconditioning, and other factors. Importantly, several comorbidities typically coexist, and comorbidities increase the symptoms, functional disability, hospitalization, and mortality risk above that conferred by COPD alone. Clinicians should routinely assess individuals for these comorbidities.
TABLE 81-3 ■ CARDIOVASCULAR AND MUSCULOSKELETAL IMPAIRMENTS OF THE AGING PERSON AND COPD
Accordingly, the impairments imposed by advancing age and COPD (see Tables 81-1 and 81-3) collectively increase the risk of having respiratory symptoms (dyspnea, chronic bronchitis, and wheezing), exercise intolerance, disability, hospitalization, respiratory failure, and death.
DIAGNOSIS
The diagnosis of COPD requires a compatible history (symptoms and risk factors), and spirometry to confirm the presence of airflow obstruction.
Given the heterogeneity of the disease with regard to symptoms, structural changes in the lungs, comorbidities, exacerbations, and disease trajectory, a comprehensive approach to patient assessment (see Table 81-4) and management is needed.
TABLE 81-4 ■ COMPREHENSIVE ASSESSMENT OF THE COPD PATIENT
Epidemiologic Surveys
Epidemiologic surveys of COPD are often based on symptoms of chronic bronchitis or physician-diagnosed emphysema/COPD. Prior work has shown, however, that two-thirds of persons who have symptoms of chronic bronchitis and half of those with physician-diagnosed emphysema/COPD have normal spirometry (do not have airflow obstruction). This relates to overlap between symptoms of chronic bronchitis (cough in particular) and those of other conditions (postnasal drip, acid reflux, etc) or being drug-induced, and to low utilization of spirometry in primary care settings.
Moreover, patients may underreport symptoms, due to assumptions that dyspnea, fatigue, and exercise intolerance may be due to aging alone. Also, many alternate conditions lead to symptoms similar to those of COPD, including but not limited to other forms of lung disease, cardiac disease, dynamic upper airway obstruction, pulmonary embolus, and others. As a result of these issues, COPD is frequently both under-diagnosed and misdiagnosed.
Spirometry Given the above concerns, and because pathologic confirmation is invasive and not routinely necessary or available, the clinical strategy for establishing a diagnosis of COPD is based on spirometry-confirmed chronic
airflow obstruction (less than fully reversible). Spirometry can be performed conveniently with a portable, handheld device, using protocols from the American Thoracic and European Respiratory Societies (ATS/ERS). These protocols require performance of at least two FVC maneuvers that meet ATS/ERS acceptability criteria—FVC maneuver is defined as the maximal volume of air that is exhaled with maximal effort starting from maximal inspiration.
The most common spirometric measures of interest are the FEV1 and FVC, and the ratio between the two (FEV1/FVC). An additional measure is the forced expiratory volume in 6 seconds (FEV6), used as an estimate of FVC. Although the FEV6 is more reproducible and less physically demanding
than the FVC, it is limited when distinguishing normal spirometry from a restrictive impairment, and predicted values are currently unavailable for those older than age 80.
Establishing airflow obstruction A reduced FEV1/FVC ratio establishes the presence of airflow obstruction. However, given that healthy age-related airflow limitation is also characterized by a reduced FEV1/FVC, the
threshold that establishes airflow obstruction must account for normal aging, including increased variability in spirometric performance (at least 50% greater in healthy 80-year-olds than healthy 40-year-olds).
To account for age-related changes in lung function, the ATS/ERS recommend that airflow obstruction be established by a lower limit of normal (LLN) threshold for FEV1/FVC. The ATS/ERS approach currently
defines the LLN as the fifth percentile distribution of reference values, calculated in a population of healthy never-smokers matched for age, height, sex, and ethnicity (these predict normal lung function). In the United States, reference values are often based on equations from the Third National Health and Nutrition Examination Survey (NHANES III). Although these account for age-related declines in lung function, the NHANES III equations are limited to those age 80 or younger, and do not consider variability in spirometric performance. Hence, the Global Lung Function Initiative (GLI) has recommended that the LLN instead be defined as the fifth percentile distribution of Z-scores (Z-score of –1.64) (Table 81-5). The GLI-calculated Z-scores account for the age-related decline in lung function and the increased variability in spirometric performance, and use reference equations that include Americans and many other ethnicities (worldwide), as well as an age range of up to 95 years. Prior work has
shown that airflow obstruction defined by an FEV1/FVC Z-score less than –1.64 is associated with respiratory symptoms, impaired mobility, frailty status, COPD hospitalization, and mortality. Moreover, Z-scores have a strong clinical precedent, given their use in bone mineral density testing and pediatric growth charts.
TABLE 81-5 ■ SPIROMETRIC CRITERIA FOR ESTABLISHING AND STAGING CHRONIC AIRFLOW OBSTRUCTION
Assessing the severity of airflow obstruction After establishing a reduced FEV1/FVC, current practice stages the severity of airflow obstruction based on the FEV1 expressed as percent predicted (%Pred): [measured/predicted] × 100%. The ATS/ERS recommend FEV1 thresholds of 70 and 50 %Pred when staging airflow obstruction (see Table 81-5). This approach has age-related limitations, however, since %Pred incorrectly assumes equivalent spirometric variability across the adult lifespan. To address this limitation, GLI-calculated FEV1 Z-scores may be used to stage the
severity of airflow obstruction (see Table 81-5). Prior work has shown that Z-score thresholds for FEV1 have a graded association with respiratory symptoms, COPD hospitalization, and mortality.
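The %Pred computation and the two ATS/ERS staging thresholds quoted above can be sketched as follows (an illustration only; the stage labels are placeholders for the categories in Table 81-5, and the predicted FEV1 would come from reference equations not reproduced here):

```python
def percent_predicted(measured_fev1: float, predicted_fev1: float) -> float:
    """%Pred = (measured / predicted) x 100, per the formula in the text."""
    return measured_fev1 / predicted_fev1 * 100.0


def stage_obstruction(fev1_pct_pred: float) -> str:
    """Three strata from the ATS/ERS FEV1 thresholds of 70 and 50 %Pred.

    Stage labels here are illustrative placeholders; see Table 81-5 for
    the chapter's actual terminology.
    """
    if fev1_pct_pred >= 70:
        return "mild"
    if fev1_pct_pred >= 50:
        return "moderate"
    return "severe"


# Example: a measured FEV1 of 1.2 L against a predicted value of 2.0 L
pct = percent_predicted(1.2, 2.0)   # about 60 %Pred
stage = stage_obstruction(pct)      # "moderate"
```

The same two-threshold structure applies when FEV1 Z-scores are used instead; only the cutoff values change.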
The Global Initiative for Chronic Obstructive Lung Disease Despite age-related limitations in methodology, the most often used spirometric criteria for establishing and staging airflow obstruction are those of the GOLD Report (see Table 81-5).
The age-related limitations of the GOLD Report include three fundamental flaws. First, GOLD defines a reduced FEV1/FVC by a fixed ratio less than 0.70, thus failing to distinguish between age-related airflow limitation (a normal aging-related process) and COPD-related airflow obstruction (connoting the presence of disease). In particular, an FEV1/FVC less than 0.70 is frequently seen in otherwise healthy, asymptomatic never-smokers who are age 65 or older. Second, GOLD expresses the FEV1 as %Pred, thus failing to account for age-related variability in spirometric performance. It is therefore not surprising that the prevalence of airflow obstruction in older persons is more than twofold higher when defined by GOLD versus a Z-score approach (37.7% vs 15.4%, respectively). Prior work has shown that the overdiagnosis of airflow obstruction by GOLD is not associated with respiratory symptoms, exercise intolerance, impaired mobility, COPD hospitalization, or mortality. Third, several conditions that are common among older adults, such as kyphosis, weak respiratory muscles, and obesity, can lower the FVC and hence reduce the FEV1/FVC ratio, and underlying COPD may be “masked” in such individuals.
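The contrast between the fixed-ratio and Z-score definitions can be made concrete with a short sketch (an illustration only; the Z-score itself must be computed from GLI reference equations, which are not reproduced here, and the example values are hypothetical):

```python
def obstructed_gold(fev1_fvc: float) -> bool:
    """GOLD fixed-ratio definition: FEV1/FVC < 0.70."""
    return fev1_fvc < 0.70


def obstructed_lln(fev1_fvc_z: float) -> bool:
    """LLN definition: FEV1/FVC Z-score below the 5th percentile (-1.64)."""
    return fev1_fvc_z < -1.64


# Hypothetical healthy, asymptomatic 80-year-old never-smoker: the ratio has
# drifted below 0.70 with normal aging, but the age-adjusted Z-score is normal.
print(obstructed_gold(0.68))  # True: labeled obstructed by the fixed ratio
print(obstructed_lln(-1.0))   # False: within normal limits for age
```

This is the mechanism behind the twofold difference in prevalence estimates cited above.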
Bronchodilator reversibility COPD is defined as chronic airflow obstruction that is less than fully reversible. However, up to two-thirds of individuals with COPD may have a partial component of bronchodilator (BD) reversibility. Clinical guidelines therefore recommend that spirometry be performed before and after administration of an inhaled BD (pre- and post-BD, respectively) in order to establish (1) reversibility (pre- vs post-BD values) and (2) diagnosis of COPD (using post-BD values). Among older persons, however, this approach has two disadvantages. First, older persons
have limited capacity to perform multiple FVC maneuvers, and may have an adverse response to an inhaled BD. Second, post-BD values have limited clinical relevance in distinguishing COPD from asthma, and have low reproducibility over time. Hence, the use of pre-BD values provides a reasonable approach when diagnosing and staging COPD in older persons where necessary, with limited reversibility best established by serial spirometry over time.
Additional Considerations
Although chronic airflow obstruction is currently considered a necessary and sufficient criterion for defining COPD, there are additional problems with this approach. First, individuals with symptoms of chronic bronchitis and a history of COPD-relevant exposures and other risk factors may lack spirometric airflow obstruction, yet have lower exercise capacity, increased risk of disease exacerbation, development of impaired lung function over time, and increased mortality risk. As such, these persons should be monitored over time. Second, as noted above, other individuals with COPD-relevant risk factors have a preserved (normal) FEV1/FVC ratio (owing to various factors), yet have radiologic features of COPD (including emphysema and/or chronically inflamed airways). Those with a normal FEV1/FVC ratio but a low FEV1 and evidence of inflammatory airways disease are at risk of disease progression; the term PRISm (preserved ratio, impaired spirometry) is used to define this group.
Given the limitations of spirometry alone in establishing the diagnosis of COPD, some health care professionals advocate for an updated, alternate schema for establishing the diagnosis of COPD, in which symptoms, relevant exposures, spirometry, and radiologic (CT scan) features all play a key role (see Further Reading). Use of such an approach leads to establishment of the diagnosis in a much larger population of people as compared with use of lung function testing alone. It would also have major implications for identification/selection/inclusion of participants for clinical trials of therapeutic interventions in COPD. Further research is needed to determine the impact of current therapies for COPD among those with chronic bronchitis and/or radiologic features of COPD without airflow obstruction, and those with PRISm. Currently, the diagnostic evaluation should assess additional respiratory impairments, as these further confirm COPD and are associated with adverse outcomes (Table 81-6).
TABLE 81-6 ■ RESPIRATORY IMPAIRMENTS THAT ARE COMMONLY EVALUATED TO ESTABLISH A DIAGNOSIS OF COPD
Lastly, several subgroups (“phenotypes”) of COPD with distinct features are recognized (Table 81-7). While the pathobiologic processes underlying these different groups are incompletely understood, combining the physiologic respiratory impairments with clinical and radiologic features further establishes clinical phenotypes and informs more personalized COPD-related treatment options and prognosis (Table 81-7). Additional assessments that should be made routinely include measurement of the peripheral blood eosinophil count and serum α1-AT level (and phenotype), because
these features both characterize patients and inform specific aspects of disease management.
TABLE 81-7 ■ COPD CLINICAL SUBGROUPS (PHENOTYPES)
[Table body not legibly reproduced. Recoverable row headings: Airway predominant (asthma-COPD overlap features with variable reversibility and sputum/blood eosinophilia; chronic bronchitis; bronchiectasis); Emphysema predominant (emphysema; combined pulmonary fibrosis and emphysema [CPFE]); Other (COPD-OSA overlap; COPD with blood eosinophilia [levels >200-300 cells/μL]; frequent exacerbator, defined by two or more treated exacerbations per year and/or accelerated decline in lung function). Treatments center on bronchodilators and anti-inflammatory agents, with phenotype-specific additions (eg, airway clearance techniques for bronchiectasis; lung volume reduction or transplantation for emphysema; consideration of biologics for eosinophilic phenotypes; positive airway pressure for COPD-OSA overlap, which improves survival, pulmonary hypertension, hypoxemia, and risk of hospitalization).]
CLINICAL MANIFESTATIONS
Because of shared risk factors (eg, tobacco smoke) and sequelae of progressive chronic disease, COPD is often associated with a high burden of symptoms and medical issues, as discussed above and as shown in Table 81-8. Typical symptoms of COPD include dyspnea, cough, sputum production, wheezing, fatigue, activity limitation, and sleep disturbances. The clinical course of COPD is further characterized by acute exacerbations that cycle between chronic and acute care settings, leading to accelerated declines in lung function and increases in medical burden (these are discussed further below). Importantly, spirometric measures and FEV1 alone are insufficient to
characterize the impact and burden of COPD on patients. Overall impact and prognosis of COPD are informed by the BODE Index, which includes the body mass index (B) as a measure of nutritional status, FEV1 as a measure of
the severity of airflow obstruction (O), Medical Research Council score as a measure of the severity of dyspnea (D), and 6-minute walk distance as a measure of exercise (E) capacity.
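Under the assumption that the scoring bands from the original BODE publication (Celli et al, 2004) apply (they are not reproduced in this chapter), the index can be sketched as:

```python
def bode_index(bmi: float, fev1_pct_pred: float, mmrc: int, walk_m: float) -> int:
    """BODE index (0-10; higher scores indicate worse prognosis).

    Scoring bands follow the original BODE publication and are an
    assumption here, not taken from this chapter.
    """
    score = 0
    # O: severity of airflow obstruction (FEV1 % predicted)
    if fev1_pct_pred >= 65:
        score += 0
    elif fev1_pct_pred >= 50:
        score += 1
    elif fev1_pct_pred >= 36:
        score += 2
    else:
        score += 3
    # E: exercise capacity (6-minute walk distance, meters)
    if walk_m >= 350:
        score += 0
    elif walk_m >= 250:
        score += 1
    elif walk_m >= 150:
        score += 2
    else:
        score += 3
    # D: dyspnea (modified Medical Research Council scale, 0-4)
    score += {0: 0, 1: 0, 2: 1, 3: 2, 4: 3}[mmrc]
    # B: body mass index (nutritional status)
    score += 0 if bmi > 21 else 1
    return score


# Example: low BMI, FEV1 45 %Pred, MMRC 3, 200-m walk -> BODE score of 7
print(bode_index(bmi=20, fev1_pct_pred=45, mmrc=3, walk_m=200))  # 7
```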
TABLE 81-8 ■ MEDICAL BURDEN OF COPD
Advancing age additionally increases the likelihood that comorbidities unrelated to chronic airflow obstruction will complicate the clinical course of COPD. For example, a postmortem examination of the cause of death in 43 older persons hospitalized for COPD exacerbation included heart failure in 16 (37%) and pulmonary thromboembolism in 9 (21%). The age-related predisposition for polypharmacy may also adversely affect COPD: (1) opiates and benzodiazepines may decrease ventilatory control and increase
the risk of aspiration; (2) statins may have a myopathic effect on the muscles of respiration and ambulation; and (3) β-blockers may exacerbate underlying heart failure and bradycardia. The interaction between these adverse medication effects and age-related reductions in physiologic capacity, including respiratory and cardiovascular impairments (see Tables 81-1 and 81-3), may contribute to pneumonia (28%) and respiratory failure (14%), as additional causes of death among older persons hospitalized for COPD exacerbation.
Lastly, advancing age is characterized by multifactorial geriatric health conditions, such as cognitive and physical impairments (including delirium, balance impairment, and injurious falls), sleep disorders, incontinence, frailty (sarcopenia), malnutrition, and chronic pain and dyspnea. These frequently represent sequelae of multimorbidity, polypharmacy, sedentary status, psychologic disturbances, and social isolation. Importantly, polypharmacy and complex treatment regimens (which may include oxygen and/or noninvasive ventilation) may impact adherence to medications, in turn affecting disease control. Thus, a comprehensive approach is needed when managing COPD, particularly in older persons.
MANAGEMENT OF STABLE COPD
The goals of care for stable COPD are to relieve and minimize symptoms, optimize exercise/activity tolerance and health status, and reduce the risk of exacerbations, disease progression, and mortality. To achieve these goals, decisions regarding treatment strategies should consider the predominant clinical phenotype of COPD (see Table 81-7), including the severity of corresponding impairments (see Tables 81-3 through 81-6), as well as the accompanying medical comorbidities (see Table 81-8). Ultimately, treatment options are calibrated to patient preferences, needs, and advance directives, and appropriately modified when the clinical trajectory progresses to advanced COPD.
Pharmacologic Therapies
Treatment options Medication regimens should consider the patient’s skills/abilities, peak inspiratory flow (PIF) rate, the questions in Table 81-9, and the algorithm in Figure 81-2. Prioritization is given to once-daily administration, self-contained inhaler devices, educating the
patient/caregiver on medication administration, and ensuring the patient has adequate PIF rate to entrain the medication. Table 81-10 summarizes currently available medications, emphasizing benefits, side effects, and complexity. In stable COPD, pharmacologic therapy should follow a stepwise approach that for some patients may lead to use of triple-inhaler maintenance therapy, including two classes of long-acting inhaled bronchodilators (β2-selective adrenergic agonist and anticholinergic) and an
inhaled corticosteroid, as well as prn use of a rescue inhaler (short-acting β2-agonist). Additional options include roflumilast, macrolides, vibratory positive expiratory pressure (PEP) devices, and mucolytics. Some
individuals may also benefit from biologic monoclonal antibody therapies. A
stepwise treatment approach to pharmacotherapy used commonly for those with confirmed airflow obstruction, based on symptom severity and disease exacerbations, is provided in the GOLD Report (see Further Reading).
TABLE 81-9 ■ DEVICE SELECTION CONSIDERATIONS IN OLDER PERSONS WITH COPD
FIGURE 81-2. Algorithm for inhaler selection. DPI, dry powder inhaler; EXIT-25, a 25-item cognitive test (executive interview) that evaluates executive function (the higher the score, the worse the cognitive impairment); MMSE, Mini-Mental Status Examination (the lower the score, the worse the cognitive impairment); PIF, peak inspiratory flow; pMDI, pressurized metered- dose inhaler; SMI, soft-mist inhaler.
Short-Acting Bronchodilators Bronchodilators are a mainstay of the treatment of stable COPD, irrespective of clinical phenotype, as they achieve improvements in symptoms, exercise capacity, and airflow obstruction.
Inhaled bronchodilators are preferred over oral agents, based on efficacy and side effects, and aging itself does not reduce the bronchodilator response.
Notably, bronchodilators are effective in spite of patients’ fixed, irreversible airflow obstruction; they reduce dynamic hyperinflation and end-expiratory lung volume and improve inspiratory capacity and patients’ breathing pattern.
Albuterol is an inhaled short-acting β2-agonist (SABA) with a fast onset of action. It is usually the first bronchodilator used, serving primarily as rescue therapy (as needed, rather than regularly scheduled), and is
commonly prescribed as a handheld pressurized metered-dose inhaler
(pMDI). Albuterol is also available as a nebulizer, an alternative for older patients who have difficulties using handheld inhalers and/or who may benefit from the mucus-mobilizing properties of this therapy.
Patients/caregivers should be advised to always have albuterol readily
available in the event of acute worsening of symptoms (dyspnea) that does not improve with rest and use of slow, pursed lips breathing. Individuals with visual impairment should mark albuterol pMDIs to avoid confusion with other inhalers. Common side effects of SABAs are listed in Table 81-10, with concerns especially raised in people with COPD and concurrent cardiovascular disease (especially hypertension or tachyarrhythmias), coexisting asthma (mortality), and excessive dosing.
TABLE 81-10 ■ COPD PHARMACOLOGY
[Table body not legibly reproduced. Recoverable structure: medications are grouped by class (short-acting β2-agonists such as albuterol; short-acting muscarinic antagonists such as ipratropium; long-acting β2-agonists; long-acting muscarinic antagonists; combination inhalers; and inhaled corticosteroids), with columns summarizing delivery devices (pMDI, DPI, SMI, nebulizer), benefits, side effects, and regimen complexity.]
The anticholinergic ipratropium is a short-acting inhaled bronchodilator, with little systemic absorption. It is indicated as maintenance therapy (see Table 81-10), but adherence is problematic because it requires frequent dosing (4 times daily). Hence, ipratropium is often combined with albuterol as a nebulized solution, particularly in long-term care and hospital settings. Combined ipratropium and albuterol is also available as a soft-mist inhaler (SMI). Common side effects of short-acting anticholinergics are listed in Table 81-10, with concerns especially raised in COPD patients with cardiovascular disease (especially bradyarrhythmias), narrow-angle glaucoma, benign prostatic hyperplasia with urinary retention, and severe constipation.
Long-Acting Bronchodilators Long-acting β2-selective adrenergic agonists (LABA
—formoterol, arformoterol, indacaterol, olodaterol, vilanterol) and long-acting anticholinergics (including the long-acting muscarinic antagonists [LAMA]—aclidinium, tiotropium, umeclidinium, glycopyrrolate, and revefenacin) are considered maintenance therapy (see Table 81-10) for persons whose symptoms are inadequately controlled with one or both classes of short-acting bronchodilator(s), and for those who have frequent exacerbations or moderate-to-severe airflow obstruction. Subsequent to initiating a long-acting bronchodilator, there is a continued role for a short-acting β2-agonist (SABA) in the management of acute symptoms (rescue
therapy). LAMA are typically used as the first-line long-acting bronchodilators due to their greater reduction in exacerbation risk as
compared with LABA. Ultimately, however, the choice of initial class of long-acting bronchodilator (LABD) depends on patients’ comorbidities, preferences (vis-à-vis symptom relief and side effects), and insurance coverage for the medication. For persons whose symptoms are inadequately controlled and/or who have disease exacerbations despite one class of LABD, the other class is typically added if safe to do so. Dual class BD treatment improves lung function and symptom control, and reduces exacerbation risk to a greater extent than single BD agents.
Long-acting BD are usually delivered by handheld inhalers. Long-acting nebulized BD (eg, twice-daily β2-selective adrenergic agonist [arformoterol, formoterol]) and/or long-acting anticholinergics (glycopyrrolate,
revefenacin) are an option for persons who are unable to use or gain benefit
from handheld inhaler devices (see treatment barriers, discussed further below). Of note, an increased risk of cardiovascular events was demonstrated in a large cohort study within 30 days of new initiation of LABA and/or LAMA therapy; hence, patients should be monitored especially carefully during this period.
Inhaled Corticosteroids Oral corticosteroids are not indicated in stable COPD due to multiple side effects. Long-term treatment with inhaled corticosteroids (ICS) has an important role in patients with frequent exacerbations not adequately managed by a long-acting β2-selective adrenergic agonist
(LABA) and/or anticholinergic (LAMA), who have been hospitalized for COPD exacerbation, or who have concomitant asthma. Some people with COPD (nearly 40% in the ECLIPSE Study Cohort) have peripheral blood eosinophilia in the absence of atopy, allergy, or asthma. Recent evidence suggests ICS combined with LABA are also an important component of treatment, reducing exacerbations and hospitalizations for those with an absolute eosinophil count greater than or equal to 300 cells/mL, a high symptom burden, and frequent exacerbations (GOLD Group D). However, in contrast to asthma, the use of ICS as single therapy is not advised in stable COPD, since airway inflammation in COPD is generally less steroid-responsive than in asthma, and where used in COPD, ICS is more effective when combined with inhaled LABA. For those with COPD who experience exacerbations despite dual class LABD, addition of ICS improves respiratory symptoms, lung function, and health-related quality of life (QOL), as well as reduces exacerbations. Recent data from two large randomized
controlled trials (RCTs) also show a survival benefit of “triple therapy” (LABA/LAMA/ICS) among individuals with frequent exacerbations (eg, > 2 per year).
The adverse effects of ICS are dose- and duration-dependent, and influenced by patient-specific factors, specifically inhaler technique. While ICS doses are lower than oral corticosteroids, the duration of therapy is often lifelong, leading to longer exposures. Significant localized deposition of ICSs occurs in the oropharynx (60%–90% of dose) and may lead to dysphonia, oropharyngeal candidiasis, and allergic contact dermatitis of the mouth, nostrils, and eyes. Poor inhaler technique and suboptimal mouth care may increase the risk for these localized adverse drug effects. For example, older adults who forget to rinse, gargle, and spit after using their ICS are more likely to develop dysphonia and/or oral and/or esophageal candidiasis. Once swallowed, the remaining ICS is absorbed from the gastrointestinal tract, undergoes first-pass metabolism, and the majority of the dose is inactivated. What is not inactivated is available systemically and contributes to potential adverse effects. Predicting the degree of systemic glucocorticoid activity from ICS is difficult. Factors that influence risk from systemic glucocorticoid activity include advanced age, sex, cigarette smoking, activity level, and dietary calcium and vitamin D intake.
The systemic adverse effects of ICS are well described. Several have unique implications in older COPD patients. In particular, ICS are associated with decreased bone mineral density and posterior subcapsular cataract development, both of which can increase the risk of injurious falls in older adults. While studies examining the impact of ICSs on the risk of osteoporosis are mixed, doses above 1000 mcg/day of fluticasone or equivalent appear to be associated with accelerated loss of bone mineral density, and increased risk of fracture. Hence, osteoporosis prevention therapy should be considered for any patient receiving long-term ICS. Though subcapsular cataracts are clearly linked to systemic glucocorticoids, causality with ICS is less clear (with advancing illness, patients are more likely to receive both oral and ICS). In addition to increasing the risk of falls, visual impairment due to cataracts can also contribute to medication errors and difficulty with proper inhaler technique. Capillary fragility, with easy bruising of the skin and risk of skin tears, is also associated with long-term ICS use.
Importantly, meta-analyses of RCTs and a large case-control study have observed an increase (1.2–1.6-fold) in the risk of pneumonia associated with ICS; although prior work showed that the use of ICS did not increase mortality in veterans hospitalized with COPD and pneumonia, it remains prudent to consider discontinuation of ICS among COPD patients who develop pneumonia. The pneumonia risk is greatest among those with older age, frailty, low BMI, high ICS dose, and blood eosinophils < 100 cells/mL. Long-term ICS use in COPD has also been associated with increased incidence of atypical mycobacterial infection (such as Mycobacterium avium-intracellulare), which can lead to symptom exacerbation (including cough, sputum production), worsening airflow obstruction (due to airway inflammation and mucus plugging), bronchiectasis, pulmonary infiltrates, and worse gas exchange. Clinicians should be aware of this association and obtain sputum samples for AFB stain and culture where needed. Lastly, chronic ICS use is associated with tracheobronchomalacia (TBM), with dynamic central airway obstruction.
This can mimic symptoms and spirometric features of COPD, but does not respond to typical bronchodilator therapy; consideration of noninvasive ventilation may be needed. Evidence as to whether a specific ICS or delivery device poses greater risk has been inconclusive. Withdrawal of ICS can be considered for persons with blood eosinophils < 300 cells/mL (and especially if < 100 cells/mL), or for those who demonstrate prolonged periods (eg, > 1 year) free of exacerbations.
Combination Inhalers Table 81-10 presents inhalers that combine (1) a β2-selective adrenergic agonist and an anticholinergic, (2) a β2-selective adrenergic agonist and ICS, or (3) a β2-selective adrenergic agonist,
anticholinergic, and ICS. Combination therapy provides additional benefit on lung function, symptoms, exercise capacity, and exacerbation risk as compared with its components, particularly in patients who have severe airflow obstruction and/or who have frequent exacerbations. Triple LABA/LAMA/ICS further improves lung function, reduces moderate to severe exacerbations, improves QOL, and as noted above, may confer a mortality benefit for those with frequent exacerbations despite taking dual class LABA/LAMA therapy.
Roflumilast Roflumilast is a once-daily oral phosphodiesterase-4 inhibitor, serving as maintenance therapy to prevent COPD exacerbations in patients
with severe airflow obstruction (FEV1 < 50% of predicted), chronic bronchitis, and a history of exacerbations (despite standard inhaled bronchodilator therapy). Common side effects include nausea, diarrhea, and
weight loss, occurring at higher rates in those aged 65 or older. It is not effective in reducing exacerbations among those with emphysema-predominant COPD.
Macrolides Azithromycin (250 mg once daily or three times weekly) is a macrolide antibiotic that has been shown to decrease the frequency of COPD exacerbations when added to usual care among former tobacco smokers. This effect is accompanied by an improved QOL, but also an increased incidence of macrolide-resistant organisms and hearing decrements. Maintenance azithromycin therapy may be considered for frequent exacerbators whose symptom escalation is thought primarily due to bouts of worsened airflow obstruction and airway inflammation (rather than other processes such as unrecognized hypoxemia or “dyspnea crisis”), who are maximized on triple-inhaler therapy and do not have a hearing impairment, resting tachycardia or other arrhythmia, apparent risk of QTc prolongation, or clinical/radiographic evidence of nontuberculous mycobacterial infection. Since macrolide resistance can complicate treatment for nontuberculous mycobacteria (when needed), careful assessment for possible nontuberculous mycobacteria infection is warranted prior to considering regular use of macrolide therapy for routine COPD exacerbation prevention.
Mucolytics Based on low efficacy and increased risk of adverse effects, there is little evidence to support mucolytics (eg, N-acetylcysteine) as routine care for stable COPD. Hence, mucolytic use is limited to management of tenacious sputum production that is refractory to standard therapy (smoking cessation, inhaled BD and corticosteroid, antibiotics, and/or roflumilast).
Although not proven to improve lung function or reduce exacerbations, some patients report symptom relief and increased ease of sputum clearance with regular or intermittent use of guaifenesin. Persons with severe airflow obstruction with chronic mucus production and those with bronchiectasis may benefit from a regular “airway clearance” routine, with nebulized albuterol (and/or hypertonic saline), followed by use of a vibratory PEP device. This can improve symptoms (dyspnea, cough, sputum production, and wheezing) and airflow and reduce frequency and/or severity of infectious exacerbations.
Biologic Monoclonal Antibody Therapies Owing to their success in the management of severe eosinophilic asthma, recent clinical trials have investigated the role of monoclonal antibody therapies (eg, omalizumab, benralizumab, mepolizumab) in the treatment of COPD. A modest reduction in exacerbations has been found among individuals with COPD with elevated peripheral blood eosinophil counts and a history of frequent exacerbations despite triple LABA/LAMA/ICS treatment. Results have varied across trials, and further work is needed to determine which patients may be best suited to receive these therapies.
Theophylline Theophylline is a nonselective phosphodiesterase inhibitor, available in multiple oral formulations. While it has a small BD effect, affords symptom relief for some persons, and was formerly used commonly in the management of stable COPD, it is now usually NOT recommended for routine management of COPD or COPD exacerbations, given the narrow therapeutic dose range, lack of efficacy, and high likelihood of side effects, which may include tremor, tachyarrhythmia, nausea, vomiting, jitteriness, and/or seizures. If used as a third or fourth line treatment, theophylline requires therapeutic drug monitoring with a goal serum level in older adults of 8 to 12 mcg/mL. Levels can increase significantly in the presence of cirrhosis, heart failure, hypothyroidism, and cimetidine. In older adult patients treated with theophylline, coadministration of ciprofloxacin (CYP1A2 inhibitor) is associated with a twofold increased risk of hospitalization. Conversely, theophylline levels decrease when hepatic enzymatic activity is increased as in hyperthyroidism, smoking, use of phenytoin, phenobarbital, and carbamazepine, or consumption of cruciferous vegetables and charbroiled meats.
Treatment barriers Medication nonadherence is reported in 50% to 60% of COPD patients. Nonadherence is associated with more frequent exacerbations, worsening of disease, and difficulty in reconciling medications. The causes of nonadherence include cognitive and physical impairments, cost, medication complexity, lack of perceived benefit, and adverse drug effects.
Cognitive and physical impairments leading to improper inhaler technique are considered further below.
Cognitive Impairment A Mini-Mental Status Examination (MMSE) score less than 24, inability to perform intersecting pentagons, and/or an EXIT-25
cognitive test score greater than or equal to 15 is associated with impaired ability to learn inhaler techniques. Requiring patients to “teach-back” inhaler technique may further identify those with difficulty performing multistep commands. Cognitive impairment often leads to uncoordinated “press and breathe” use of inhalers, as well as nonadherence. Consequently, nebulizers often replace handheld inhalers in cognitively impaired patients. Recently, electronic devices attached to inhalers have been introduced; these may help promote adherence by providing sound and/or light-based prompts.
Physical Impairment Many handheld inhalers require both fine-motor skills and adequate grip strength to actuate. This is especially problematic for certain pressurized metered-dose inhalers (pMDIs), dry powder inhalers (DPIs), and newer SMIs. Decreased hand strength, for example, is a predictor for incorrect use of a pMDI. The presence of arthritis, joint pain, or neuromuscular diseases may prevent proper use of handheld inhalers, by reducing grip strength or impairing dexterity. Impaired manual dexterity prevents patients from properly priming or preparing the inhaler/nebulizer, particularly if devices require removing capsules from foil packaging, mixing diluents with active agents, and puncturing capsules prior to inhalation.
Visual impairment can make distinguishing between products difficult (rescue inhaler versus maintenance inhaler) and negate the adherence feedback mechanisms built into certain devices (dosage counters).
Individuals with impaired hearing may have difficulty understanding inhaler training, and several inhaler/valved-chamber devices (VCD) use auditory cues for proper inhalation technique, to which the hearing-impaired older adult is unable to respond.
DPIs require a sufficient peak inspiratory flow (PIF) of > 45 L/min to disaggregate the dry powder from the container. Reduced respiratory muscle strength due to aging and progressive COPD (severe airflow obstruction) may explain the variability in DPI drug–lung deposition in older adults. While DPIs result in higher drug–lung deposition (10%–40%) when compared to pMDIs (10%–20%), the increased PIF required to disaggregate the product also results in up to 80% oropharyngeal deposition. The use of pMDIs may also be adversely affected by a low PIF, since these require a slow, deep inhalation, preferably through a spacer device, to allow for maximal drug–lung deposition. In contrast, SMIs require a gentle breathing maneuver, thus offering an advantage over DPIs and pMDIs for patients with reduced PIF or who struggle with “press and breathe” coordination. When performed
correctly, SMIs can achieve 40% to 60% drug–lung deposition. Importantly, patients/caregivers should be reminded that inhalation technique varies with DPI, pMDI, and SMI. Poor inspiratory effort and/or coordination can result in suboptimal medication delivery to the lungs, and in turn may result in suboptimal treatment effects.
When cognitive or functional impairments limit the use of handheld inhalers, a nebulizer is the easiest delivery device to use. Moreover, patients who have severe dyspnea, severe airflow obstruction, or frequent exacerbations may also find nebulizers more effective. Lastly, as discussed below, nebulizer therapy may be a cost-effective alternative in patients with high Medicare Part D health care costs. Unfortunately, nebulizers have disadvantages: they require a power source, are not easily portable, must be carefully maintained/cleaned, and generally take longer to administer.
Cost Payers often seek lower-cost therapies, potentially compromising efficacy and patient adherence and increasing regimen complexity, or they require prior authorization for certain products (COPD prescription fill rates decline when patients perceive barriers to obtaining medications). In addition, because older adult patients regularly experience polypharmacy, they may have significant out-of-pocket expenses due to a coverage gap (“the donut hole”) in their Medicare Part D annual provider and prescription expenses. This increased cost impacts handheld inhalers, as these are covered under Medicare Part D. However, nebulizers, compressors, and medications that are available as inhalation solutions are covered under Medicare Part B. Hence, any premiums, deductibles, and copayments for these items do not count toward Medicare Part D costs, potentially helping older patients avoid gaps in coverage.
Nonpharmacologic Therapies
Vaccination Annual influenza vaccination is a well-established, safe, and efficacious means of reducing complications related to influenza, including COPD exacerbations. Older adult patients can elect to receive either the regular-dose influenza vaccine or the newer high-dose influenza vaccine designed for persons aged 65 and older. Evidence supports recommending older adults receive the high-dose quadrivalent vaccine. This vaccine is associated with a higher immune response in older adults and was 24.2% more effective in preventing influenza when compared to the standard-dose vaccine. The Centers for Disease Control and Prevention (CDC) and its
Advisory Committee on Immunization Practices have not expressed a preference for either vaccine.
Pneumococcal vaccination has been shown to reduce COPD exacerbations, community-acquired pneumonia, and bacteremia. Patients should therefore be vaccinated according to CDC guidelines, which recommend use of polysaccharide pneumococcal vaccine (PPV23; Pneumovax) for all adults with COPD or who are current smokers. The 13-valent conjugate vaccine (PCV-13) is recommended for people with COPD older than age 65 and those who are immunocompromised, have a history of invasive pneumococcal disease, or have a history of cochlear implant.
Smoking cessation Abstinence from tobacco smoking is important for all persons with COPD, irrespective of age or disease severity. Smoking cessation, even undertaken in older age, has been associated with better preservation of general daily functioning in older adult years. Smoking status should be addressed at every office visit and during hospitalizations, including number of cigarettes per day, triggers for smoking, time to first cigarette use following morning awakening, details of prior quit attempts, and patient-identified barriers to quitting. Smoking cessation attenuates the rate of decline in lung function, improves the response to inhaled BD therapy, and reduces the risk of COPD exacerbation, cardiovascular disease, cancer, and death.
Several methods are available to assist patients with smoking cessation.
The combination of behavioral counseling and pharmacotherapy is most effective in achieving long-term abstinence. Importantly, available interventions for smoking cessation have comparable efficacy among older adults to those seen for younger persons. Therefore, a nihilistic approach based on advanced patient age should not be taken. The “5 A’s” algorithm (developed by the US Public Health Service)—Ask, Advise, Assess, Assist, and Arrange—is an effective behavioral counseling strategy for use in the clinic setting. Referral to a smoking cessation program should be offered, and information regarding the US nationwide toll-free telephone counseling service (1-800-QUIT-NOW) should be provided. Recently developed mobile phone apps, text messaging, and computer-based interventions are effective for some persons. For patients ready to make a quit attempt, a quit date should be set, an action plan for managing cravings and triggers for smoking should be created, and a plan for follow-up with the health care provider must be arranged. The three pharmacologic treatments with the best
efficacy include nicotine replacement therapy (NRT) (especially the combination of nicotine patch and a faster-acting agent such as gum or lozenges to address urges), varenicline, and bupropion. No single strategy has proven to be better than the others. Patient preference, comorbidities, and previous experience with quit attempts influence the choice of first-line treatment. Nicotine replacement therapy is generally safe, even for persons with underlying cardiovascular disease, but should be avoided in the setting of recent myocardial infarction (MI) or stroke. Bupropion may be the best first-line agent for persons with depression or schizophrenia, but NRT may be preferable for persons with bipolar disorder, since antidepressant agents can trigger manic episodes.
Patients taking bupropion or varenicline must also be monitored for development of neuropsychiatric symptoms, including suicidal ideation. Close partnering with patients’ mental health care providers is important in choosing pharmacotherapy for smoking cessation, especially since some patients may already be taking other antidepressants. Combination therapy can be considered when single class pharmacotherapy has failed. The combination of NRT and varenicline has thus far demonstrated the highest rates of short-term success. Nicotine replacement should be provided as needed during hospitalizations, to prevent nicotine withdrawal symptoms. Other pharmacologic therapies for smoking cessation are the subject of active research.
Long-term oxygen therapy Requirement for long-term oxygen therapy (LTOT) is a poor prognostic feature of COPD. The indications for LTOT are shown in Table 81-11. A formal oxygen prescription specifying the indications for treatment, conditions for use (rest, exercise, and/or sleep), and type of oxygen system(s) is required. LTOT used more than 16 hours per day improves survival for patients with severe resting hypoxemia (PaO2 ≤ 60 mm
Hg) with COPD. Among persons with resting hypoxemia, LTOT can also improve exercise capacity, dyspnea, sleep quality, cognition, depression, cardiovascular comorbidity, and QOL, as well as reduce the frequency of hospitalizations. However, a recent RCT did not find clear benefits of supplemental oxygen in regard to lung function, exercise capacity, QOL, time to first hospitalization, or mortality among people with moderate resting or exercise-induced oxygen desaturation alone. It may still be used for those who desaturate to less than 88% or whose PaO2 falls below 55 mm Hg during exertion
AND who have improvements in dyspnea and/or exercise capacity with its
use. The role of isolated supplemental O2 during sleep for those without congestive heart failure (CHF), pulmonary hypertension, obstructive sleep apnea (OSA), or concurrent lung disease who lack daytime hypoxemia is also unclear.
TABLE 81-11 ■ INDICATIONS FOR LONG-TERM OXYGEN THERAPY
Practical issues and potential risks associated with providing LTOT must be considered, particularly among older adult persons. Patients often perceive a reduction in their mobility and/or QOL based on requirement for supplemental O2. Long oxygen tubing attached to oxygen concentrators in the
home can pose a fall risk. Cigarette smoking, cooking flames, and other heating systems can pose the risk of explosion or fire. Lightweight portable O2 systems maximize the likelihood that the older adult patient will be
physically capable of using their O2 with exertion, but may not be available and/or may not sufficiently maintain an adequate SaO2 (eg, for those with
high liter flow rates). Rollator walkers with handbrakes, seat, and basket assist debilitated patients in carrying their O2 systems and maintaining mobility both in and outside the home. Patient cognition and home layout and
safety should also be assessed. Education about oxygen equipment and safety
should be provided to patients and their caregivers.
Noninvasive positive pressure ventilation Noninvasive positive pressure ventilation (NPPV) is indicated for the management of some patients with COPD. First, OSA is a common comorbidity of COPD; persons with both conditions are considered to have COPD-OSA overlap syndrome. Individuals with COPD-OSA overlap syndrome have worse gas exchange disturbances, increased risk of pulmonary hypertension, and a higher mortality risk than those with either condition alone. Continuous positive airway pressure (CPAP) therapy is indicated for the management of OSA, when present.
Second, NPPV (usually bilevel positive airway pressure [BPAP]) is indicated for persons with acute exacerbations of COPD associated with acute hypercarbia (PaCO2 > 45–50 mm Hg) and respiratory acidosis (who
lack contraindications as noted below). In this setting, NPPV improves gas exchange, improves breathing pattern, and provides support to the respiratory muscles, while allowing time for medical therapy such as BD, antibiotics, and corticosteroids to improve airflow and reduce the work of breathing.
High-quality RCTs have shown that, as compared with medical therapy alone, NPPV for acute hypercarbia related to COPD exacerbations reduces intubation rates, shortens hospital length of stay, reduces the risk of ventilator-associated pneumonia, and improves survival. Among those with persistent hypercarbia (PaCO2 > 53 mm Hg) within 4 weeks of an acute COPD exacerbation with acute-on-chronic hypercarbic respiratory failure, addition of NPPV to home supplemental O2 yields a longer time to readmission and a reduced risk of death over the subsequent 12 months.
The role of “chronic” NPPV for long-term management of stable patients with COPD and hypercarbia is less certain. Uncontrolled case series suggest that nocturnal NPPV can improve daytime gas exchange and daytime walking activity (postulated to be related to respiratory muscle rest and reduction in hyperinflation), but randomized trials have not consistently shown these effects. Recent RCTs suggest chronic NPPV can reduce dyspnea and improve QOL, and may have very slight benefit with regard to reducing hospitalizations and improving mortality rate for those with stable COPD and
chronic hypercarbia. It has been shown in research settings to facilitate exercise training in pulmonary rehabilitation (PR) for some persons.
However, patient intolerance and poor acceptance of NPPV commonly limit its long-term use. Chronic NPPV is considered for nocturnal use in persons with daytime hypercarbia (PaCO2 ≥ 50 mm Hg), in those with sustained
alveolar hypoventilation for more than 5 continuous minutes during sleep despite supplemental oxygen therapy greater than or equal to 2 L/min, or in those who have episodes of hypercarbic respiratory failure requiring assisted ventilation. In these individuals, chronic NPPV may provide symptomatic relief, particularly if there is coexisting sleep disruption and daytime fatigue. Noninvasive ventilation settings should be adjusted to target reduction in PaCO2.
Contraindications to NPPV include cardiorespiratory arrest, unstable vital signs, inability to tolerate the face mask (discomfort, skin breakdown, or claustrophobia), facial trauma, inability to protect the airway, combative behavior, severely impaired consciousness, need to clear purulent secretions, high aspiration risk, and recent gastric or esophageal surgery. Caution is also urged regarding NPPV in persons with giant bullae or very severe bullous emphysema, given the risk of inducing a pneumothorax with a resultant nonhealing bronchopleural fistula.
Pulmonary rehabilitation People with COPD are less active than healthy age-matched persons. Exertional dyspnea, leg fatigue, anxiety, depression, impaired balance, and fear may all contribute to a low physical activity level. Other contributory factors include skeletal muscle dysfunction (including sarcopenia), which is common in COPD, and the increased dyspnea and gas exchange abnormalities that result from acute exacerbations. Importantly, physical inactivity can lead to deconditioning, sarcopenia and other aspects of muscle dysfunction, decreased endurance, and exercise intolerance; it is itself associated with an earlier onset of lactic acidosis at low work rates, yielding a greater ventilatory demand and dyspnea at any given activity level. Moreover, as physical inactivity progresses, there is increased physical disability, social isolation, anxiety, and depression, as well as risk of COPD exacerbation (hospitalization), obesity, diabetes, cardiovascular disease, and death.
PR is “a comprehensive intervention based on a thorough patient assessment followed by patient-tailored therapies that include, but are not
limited to, exercise training, education and behavior change, designed to improve the physical and psychological condition of people with chronic respiratory disease and to promote the long-term adherence to health- enhancing behaviors.” PR reduces dyspnea and leg fatigue, improves exercise tolerance and patient-reported QOL, decreases the risk of hospitalizations and other urgent health care utilization, and reduces anxiety and depression. It can also improve balance, reduce the risk of falls, and improve physical activity levels.
Participation in PR within 90 days of hospitalization for COPD exacerbation is also associated with improved survival. The process of PR includes patient assessment, supervised exercise training and reconditioning, patient education, and outcome assessment. Exercise training typically includes aerobic and strength training of the lower and upper extremities, using a treadmill or hallway walking, cycling, arm ergometry, stair climbing, and light weight lifting. Nordic walking and interval-type exercise, balance training, and Tai chi are also effective. Walking aids such as rollator walkers can assist in improving tolerance of walking in severely disabled persons.
The education component of PR is geared toward helping patients understand and manage their condition (including early recognition of exacerbations), as well as promotion of a healthy and active lifestyle. Emphasis is also typically placed on training patients with pacing, energy conservation, and pursed lips breathing techniques, as well as strategies for management of anxiety and prevention of “dyspnea crisis.” PR also affords the opportunity to counsel and educate patients regarding advance care planning and educate them regarding life support interventions.
All patients who remain symptomatic despite optimized pharmacologic therapy (including those with severely impaired lung function) should be referred for PR. Persons older than age 70, even those with severe lung function impairment and comorbidities, benefit to the same degree as younger persons. Close partnering between patients’ health care providers and PR staff is important to ensure patient safety.
Surgical therapies Surgical therapies for COPD include lung volume reduction surgery (LVRS) and lung transplantation. LVRS is a procedure wherein areas of lung that are virtually functionless (not contributing to ventilation or perfusion) are removed, thereby reducing hyperinflation and improving elastic recoil and diaphragm positioning and function. LVRS can be accomplished by surgical resection of regions of lung (typically upper lung
zones), or bronchoscopic placement of coils, valves, or other devices that lead to atelectasis of targeted lung zones. When used in carefully selected patients, LVRS can lead to improvements in exercise tolerance, oxygenation, lung function, QOL, and survival. LVRS can also serve as a bridge to lung transplantation for some patients. Ideal candidates for surgical LVRS are those with severe emphysema-predominant COPD (heterogeneous distribution, upper lung zone predominant) with FEV1 less than 50%
predicted, lung hyperinflation (eg, RV > 150–200% of predicted), who lack features of chronic bronchitis, are not at extremes of body weight, lack other unstable medical comorbidity, do not require regular treatment with systemic corticosteroids, and who remain symptomatic with low exercise tolerance despite optimized pharmacologic therapy and PR, and have maintained long-term abstinence from tobacco use. Those whose FEV1 or diffusing capacity
(DLCO) is less than 20% of predicted and who have a homogeneous distribution of emphysema have higher mortality following surgical LVRS and should not undergo the procedure. Patients must also be selected carefully for bronchoscopic LVRS, as it is associated with complications including pneumothorax (in up to 30%) and infection. Bronchoscopic LVRS can be considered in those with variable distribution of emphysema (including a homogeneous pattern), as long as evaluation demonstrates complete lobar fissures without collateral ventilation to areas targeted for decompression.
Single or double lung transplantation may be considered for selected older adult persons with very severe COPD. Although lung transplantation can improve exercise tolerance, QOL, and survival of some individuals, anticipated survival benefit should exceed the short-term and long-term risks of undergoing the procedure, including surgical and perioperative morbidity and mortality, and potential complications related to long-term immunosuppressive therapy. Since median survival following lung transplantation is 5 to 6 years, and since surgical risk is high among older adult persons with multimorbidity, referral for lung transplantation should be undertaken only when anticipated survival related to the COPD is lower than that related to age and/or comorbidities. Given the variable rate of progression of COPD over time, this is difficult to predict. Current guidelines suggest lung transplantation referral for COPD patients who have FEV1 and diffusing capacity (DLCO) less than 25% of predicted, PaCO2
greater than 50 mm Hg and/or PaO2 less than 60 mm Hg, pulmonary
hypertension and/or cor pulmonale, BODE Index score greater than 5, are worsening clinically despite optimized medical therapies (including PR), have had one or more hospitalizations for exacerbation with acute hypercapnia or at least three severe exacerbations in the prior year, and have demonstrated long-term complete abstinence from tobacco or other substance use. They must also lack other contraindications as indicated in the 2014 lung transplantation guidelines, including recent cancer, other unstable medical or
psychological conditions, BMI greater than 35 kg/m2, inadequate social support, inability to adhere to medical therapies, uncorrectable bleeding disorder, active tuberculosis infection, or significant spinal/chest wall distortion. Although age older than 65 is a relative, rather than absolute, contraindication to lung transplantation, recent studies have shown that those older than age 70 have an increased risk of 30-day and 1-year mortality and are less likely to return to prior baseline functional status, as compared to younger persons.
ACUTE COPD EXACERBATION
The GOLD Report defines an acute exacerbation of COPD as a change in sputum volume or purulence, and/or an increase in dyspnea beyond usual day-to-day variation, that warrants a change in therapy. However, not all exacerbations with escalating symptoms result in a change in therapy; in particular, patients may underreport symptoms, attributing them to aging or other conditions. The frequency of exacerbations increases with severity of
airflow obstruction, with older persons likely to have more severe disease. It is estimated that the average older person with COPD experiences two to three exacerbations per year, each lasting up to 2 weeks. Each of these events presents a risk for respiratory failure and death, but more commonly results in tangible impacts on symptoms, increased airflow obstruction and related respiratory impairments, disability, and QOL. COPD exacerbations also increase the risk of myocardial infarction, pulmonary emboli, and stroke; are responsible for up to 20% of hospitalizations in individuals 75 years or older; and are a major source of health care costs. Moreover, institutions are now financially penalized for all-cause readmissions following admissions for COPD exacerbations.
Risk Factors
Risk factors for acute exacerbations include advanced age, severe airflow obstruction, chronic bronchitis, comorbid diseases (particularly cardiovascular disease, diabetes mellitus, and gastroesophageal reflux), and prior exacerbations. Those with a history of two or more exacerbations in a year, or of an exacerbation requiring hospitalization, are at particularly increased risk of subsequent exacerbation events. In the aging patient, weakening of the thoracic muscles, degenerative changes of the spine, and impairment of mucociliary clearance make secretion clearance and cough less effective.
Older adult patients are therefore less able to manage the increased secretions resulting from even a mild exacerbation. Age-related changes to both adaptive and innate immune systems make responses to infection less robust, and the inflammatory responses to acute illness more discordant.
Protective immunity from prior immunization may be waning or frankly incompetent. The previously discussed changes to overall respiratory function with both age and COPD limit underlying pulmonary reserve. Comorbid conditions may make it more difficult to tolerate the stress of acute illnesses. For all these reasons, the presentations of and outcomes from acute exacerbations in older adults may be more severe than those in middle-aged individuals of similar spirometric severity.
Etiologies
Respiratory infection accounts for approximately 70% of COPD exacerbations. Viruses, typically detected using direct fluorescent antibody (DFA) or polymerase chain reaction (PCR)-based methods, can be found in up to two-thirds of individuals with acute exacerbations. Common viral pathogens, of which rhinoviruses are the most prevalent, are shown in Table 81-12. However, a positive PCR does not definitively indicate causality, as positive results can also be found in asymptomatic, stable individuals with COPD. The exceptions to this are influenza and SARS-CoV-2, which should always be considered causative pathogens and treated accordingly.
TABLE 81-12 ■ COMMON INFECTIOUS PATHOGENS IN ACUTE EXACERBATIONS OF COPD
Bacteria are responsible for 40% to 60% of COPD exacerbations, and may coinfect with viral pathogens. As shown in Table 81-12, Haemophilus influenzae, Moraxella catarrhalis, and Streptococcus pneumoniae are the most common, but Pseudomonas aeruginosa, enteric gram-negative bacteria, and Staphylococcus aureus (including MRSA) are frequently isolated in more severe COPD and in residents of skilled nursing facilities and/or those who have received frequent courses of antibiotics; the contribution of atypical pathogens is more difficult to estimate accurately given the infrequency with which titers or accurate cultures are obtained.
Acquisition of new strains of common pathogens plays a much more
important role in acute exacerbations of COPD than changes in the load of colonizing bacteria. This is supported by observations that new bacterial strains are associated with more robust humoral immunity and inflammatory responses, and clearance of these strains correlates with resolution of acute illness. Other common causes of COPD exacerbation include exposure to environmental irritants (including pollutants, environmental smoke, strong odors such as floor cleansers or perfumes), GERD, allergens, and changes in ambient air temperature and/or humidity. Chronic aspiration is another consideration in older persons. Hypogammaglobulinemia is another recently recognized risk for COPD exacerbation that is also associated with increased risk of hospitalization and mortality.
Diagnostic Considerations
COPD exacerbation is a clinical diagnosis. COPD exacerbations are typically associated with worsened airflow obstruction over and above that present at the patient’s baseline. The worsened airflow obstruction may relate to increased airway mucus, airway wall inflammation, and/or bronchoconstriction. Other triggers for worsening symptoms in people with COPD include episodes of hypoxemia (with associated increase in ventilatory demand), unrecognized ventilatory insufficiency (with acute and/or chronic hypercarbia with need for NPPV support), and “dyspnea crises” related to episodes of dynamic hyperinflation above baseline with worsened mechanical disadvantage of the respiratory muscles and reduced inspiratory capacity. The latter typically result from exertion or other causes of faster respiratory rate (with less time for exhalation and lung emptying), such as anxiety, pain, or other causes of tachypnea. Importantly, several other conditions including pneumonia, pulmonary embolus, CHF (systolic and/or diastolic), pulmonary hypertension/cor pulmonale, sepsis, anemia, variable dynamic upper airway obstruction (intrathoracic—such as TBM; or extrathoracic—such as vocal cord dysfunction) can lead to escalation of symptoms and/or worsened gas exchange among individuals with COPD, but are not COPD exacerbations per se.
These conditions are not, therefore, expected to respond to typical treatment for COPD exacerbation (considered further below). Moreover, treatments that are crucial to COPD exacerbations may exacerbate comorbidities or cause potentially dangerous drug interactions. These comorbidities may additionally complicate or prolong recovery from COPD exacerbation. Individuals with COPD are particularly vulnerable to cardiovascular morbidity and mortality both during exacerbations and in the recovery period. Rigorous attention to detection and individualized treatment of these conditions is needed among persons in whom COPD and COPD exacerbation are suspected.
For those with true COPD exacerbation, in many cases, sputum culture does not differentiate between colonizing flora and true infectious pathogen, and underestimates the most common bacterial culprits; therefore, sputum culture is not necessary in routine COPD exacerbations. However, sputum culture may be helpful in detecting worrisome pathogens or resistant strains, if an individual has failed empiric outpatient treatment strategies or has risk factors for Pseudomonas aeruginosa (hospitalization in the past 3 months, ≥ 4 courses of antibiotics in the past year, or severe airflow obstruction). DFA,
with or without PCR, is important during influenza season, as specific antiviral therapy may be initiated.
Treatment of COPD Exacerbations
Most exacerbations can be managed in the outpatient setting. Patients with more severe airflow obstruction, significant comorbidities, inadequate home support, new/worsening gas exchange abnormalities, prior treatment failure, and/or diagnostic uncertainty should be admitted to the hospital for further evaluation (labs, chest x-ray, ECG, arterial blood gas, and other testing as indicated); however, depending on home-based resources, a few such patients may still be candidates for intensive home care. Those with impending respiratory failure are admitted to the intensive care unit (ICU).
Escalation of inhaled medications Escalation of BD therapy is required for all COPD exacerbations and may be sufficient to treat mild cases. The preferred BD is a short-acting inhaled β2-agonist with or without a short-acting
anticholinergic. Nebulized delivery may be more effective and comfortable than metered dose inhalers, particularly for patients with significant dyspnea, cognitive impairment, or severe airflow obstruction. Usual maintenance therapies, such as long-acting BD and inhaled corticosteroids, should be continued during mild exacerbations. For those with severe exacerbations requiring frequent regular dosing with nebulized BD, it may be advisable to withhold LABD temporarily to reduce the risk of arrhythmia, tremulousness, and other adverse effects. As recovery ensues, the patient is transitioned back to a maintenance regimen, but a step-up in inhaled therapy may be warranted to reduce the risk of subsequent exacerbations. For example, a second class of long-acting BD may be added for those previously receiving single-class maintenance BD, and addition of ICS may be considered for those who exacerbate despite taking dual-class maintenance BD (see algorithm in GOLD Report in Further Reading). If clinical improvement is thereafter achieved, attention is given to streamlining the number of inhaled medication devices to improve self-efficacy and compliance.
Antibiotics General antibiotic recommendations for COPD exacerbations are presented in Table 81-13. Antibiotics used for treatment of exacerbations associated with increased volume and/or purulence of sputum are associated with reduced treatment failure and increased time between exacerbations.
Procalcitonin, a peptide precursor that increases in the serum in response to
bacterial toxins, has been advocated as a potential biomarker to guide antibiotic use. One Cochrane meta-analysis demonstrated that procalcitonin guidance could reduce antibiotic exposure without increase in treatment failure or mortality. However, procalcitonin is not currently recommended to guide antibiotic use in COPD exacerbations, since the level may not be elevated in persons with bronchitis (rather than pneumonia), and its use to guide antibiotics among patients requiring ICU care for acute exacerbation of COPD demonstrated an increase in adverse outcomes. In contrast, recent evidence suggests that serum C-reactive protein (CRP) levels may have utility in guiding which individual will benefit from antibiotic therapy, and for whom antibiotics can be avoided.
TABLE 81-13 ■ SUGGESTED ANTIBIOTIC REGIMENS FOR ACUTE EXACERBATION OF COPD
Choice of antibiotic is driven by a number of factors (history, tolerance, disease severity, comorbidities, medication interactions, prior treatment failure, local patterns of resistance), and complicated by the fact that sputum culture data is unreliable in guiding decisions. Older adult residents of
skilled nursing facilities, or those who have spent time in acute-care or subacute-rehab facilities within 90 days of exacerbation, and persons with severe airflow obstruction are more likely to be colonized with resistant organisms (methicillin-resistant Staphylococcus aureus, Pseudomonas
aeruginosa, and multidrug-resistant gram-negative organisms). These latter risk factors should guide choice of empiric therapy. The duration of antibiotic therapy, in the absence of complicating factors such as pneumonia or bronchiectasis, is typically 3 to 7 days. Those with bronchiectasis may require longer treatment (10–14 days).
Corticosteroids The acute use of systemic corticosteroids for treatment of moderate to severe COPD exacerbation is associated with improved symptoms, faster improvement in lung function, decreased treatment failure, and shorter duration of hospitalization. However, systemic steroids can cause side effects, particularly in older persons with multimorbidity. Common side effects include hyperglycemia, delirium, fluid retention, myopathy, and/or thrush. In addition, because older COPD patients often have atrial fibrillation requiring anticoagulation, and because systemic corticosteroids can cause skin thinning and purpura, this combination of medications may also lead to greater bruising and bleeding. Frequent use of systemic corticosteroids can lead to acquired hypogammaglobulinemia and worsen immune suppression.
The adverse effects of systemic corticosteroids are dose- and duration-dependent. For example, delirium is more likely to occur with doses greater than 60 mg of prednisone per day. As such, dose and duration of therapy should be as conservative as possible. The optimal dose and duration of corticosteroids are unknown; however, a study of individuals hospitalized with COPD exacerbations (most of whom had severe disease) demonstrated that a 5-day course of oral prednisone (40 mg/day) was noninferior to a more conventional 7- to 14-day steroid taper in terms of event resolution and time to next exacerbation. In fact, those on the shorter course had a reduced length of stay, though reasons for this were unclear.
Supplemental oxygen Supplemental oxygen should be provided as needed to achieve SaO2 > 88% at rest, during exertion, and during sleep. Those with a history of hypercarbia should be monitored closely for any worsening
CO2 retention while receiving oxygen therapy. High-flow nasal cannula O2
can reduce work of breathing post-extubation among those recovering from acute hypercarbic respiratory failure requiring mechanical ventilation.
Invasive and noninvasive ventilation Many patients admitted to the hospital with impending respiratory failure in the setting of COPD exacerbation can be managed effectively with noninvasive ventilation, though less so in the case of concomitant pneumonia. Intubation and mechanical ventilation may be required in the case of severe respiratory failure, if consistent with an individual’s goals of care.
Discharge planning Discharge planning should consider referral for PR and/or tobacco cessation counseling, the need for home oxygen, and a recalibration of maintenance therapy (see Management section above). Because new treatment regimens increase the likelihood of medication nonadherence or errors, formal medication reconciliation and pharmacist-based review should be considered. Patient and caregiver education regarding the discharge treatment plan is likewise essential, as are interventions to support safe and prompt transition back to routine care by the primary care provider or specialist. Furthermore, advance care planning should be initiated or reviewed in the wake of hospitalization, as a history of severe exacerbation increases the risk of future events.
COVID-19 (SARS-CoV-2) and COPD
The emergence of COVID-19 posed new, unprecedented challenges for people with COPD. Prior to the widespread availability of vaccines, fear of contracting COVID-19 led most individuals to remain homebound, which further reduced daily physical activity levels and worsened functional disability, social isolation, fear, anxiety, and depression. Most PR programs had to close at least temporarily due to the pandemic. Ironically, social distancing mandates, use of masks, avoidance of crowds, and home isolation, together with increased adherence to prescribed medical therapies (eg, related to anxiety and fear), contributed to an overall reduction in acute COPD exacerbation events and to community-based management of many exacerbations that did occur. To date, there is no clear evidence that individuals with COPD are more susceptible to contracting SARS-CoV-2, but worse clinical outcomes have been noted among those with COPD who became infected, including increased risk of hospitalization, ICU admission, respiratory failure, and possibly also death. Older age and cardiovascular morbidity further increase the risk of poor outcomes. Evidence to date suggests those with COPD who contract COVID-19 should continue their usual COPD treatment, that they should be
considered for standard COVID-19 therapies (eg, monoclonal antibody, remdesivir, dexamethasone, etc) as appropriate, and close attention should be paid to their oxygen levels and work of breathing. SARS-CoV-2 mRNA vaccines appear to be safe and effective in older adults; however, limited data exist among older persons with multiple comorbidities and frailty.
PALLIATIVE CARE
Palliative care is an important and beneficial component of the overall integrated care of people with COPD. Average life expectancy from the time of diagnosis is shorter for COPD than for other chronic conditions such as heart disease or stroke (14 years for COPD compared with 21 years for heart disease). The health trajectory of aging persons with COPD is variable. While disease progression is gradual for many, some have a more rapid and progressive decline in lung function resulting in increased disability, intermittent exacerbations requiring hospitalizations, and death.
Respiratory symptoms of dyspnea, cough, and sputum production are frequently reported by patients with advanced COPD, affecting up to 98% of patients (Table 81-14). Dyspnea may be more pronounced in older persons with COPD due to coexisting respiratory muscle weakness and comorbidities including cardiovascular disease, sarcopenia, and generalized frailty. Chest tightness and wheezing are also common. Older persons with advanced COPD may also suffer from nonrespiratory symptoms, including fatigue and weakness, likely secondary to shortness of breath and progression of underlying disease and comorbidities including frailty. Pain is also present in 20% to 70% of persons with advanced COPD. Finally, psychological and spiritual distress, such as anxiety, depression, and spiritual worries, are common in COPD. These chronic physical, psychological, and spiritual symptoms typically persist over many years and contribute to a poor QOL. Moreover, COPD also affects and places demands on family members and caregivers, whose needs are often not assessed or addressed in the context of patients’ routine health care visits.
TABLE 81-14 ■ SYMPTOM BURDEN AND MANAGEMENT IN ADVANCED COPD
Despite growing evidence of the substantial palliative care needs of patients with COPD (particularly those with advanced disease), optimal timing for referral to palliative care or hospice is difficult to predict, given the chronic nature and variable progression of the disease. Very severe airway obstruction (FEV1 < 30% predicted or FEV1 Z-score < –2.55) can
help identify patients at risk for uncontrolled symptoms from advanced COPD. However, FEV1 alone is not sufficient to identify patients who would
benefit from palliative care. Gas exchange impairments (hypoxemia or hypercarbia) and weight loss also suggest advanced COPD. Health care utilization, including frequent emergency department visits, hospitalizations, or ICU admissions, may be a marker of those with COPD who would benefit from palliative care involvement. Patients who face declining functional status with difficulty completing activities of daily living, or those with impaired mobility due to progressive dyspnea, have a considerably higher mortality rate than those with normal functional status.
Given the complexities of prognostication, some palliative care specialists suggest asking, “Would you be surprised if this patient were to die in the next 12 months?” If the answer is “no,” this question may help identify patients in need of palliative care. However, as with all serious life-limiting conditions, palliative treatment and support can be beneficial when provided longitudinally to older patients with COPD. In addition, since the definition of palliative care has evolved to include patients and families facing serious and/or life-threatening diseases, as well as those with physical, spiritual, or psychosocial sources of suffering, the population who might benefit from a palliative care team’s involvement has expanded significantly. Hence, rather than focusing on prognosis or end of life as the key criterion for referral to palliative care, medical providers could identify which COPD patients would benefit from assistance managing severe, refractory, disabling symptoms, and/or those who need an “extra layer of support” from an interdisciplinary team consisting of doctors, nurses, social workers, respiratory therapists, care managers, and chaplains.
The goal of palliative treatment for patients with advanced COPD is to reduce symptoms, maintain independence and dignity, and improve overall QOL. To this end, in addition to optimized pharmacotherapy, supplemental oxygen should be provided to older persons with severe COPD who are chronically hypoxemic. Of note, among patients with COPD, the level of measured peripheral oxygen saturation often does not correlate with the level of perceived breathlessness. Education regarding the causes of dyspnea and use of pursed lips breathing to minimize dynamic hyperinflation are important. For patients with more advanced disease and respiratory insufficiency resulting in gas exchange abnormalities and/or chronic respiratory failure, NPPV should be considered for nighttime support and as needed for uncontrolled daytime symptoms (dyspnea). NPPV is also a key tool to consider for patients presenting with acute respiratory failure,
including those who have declined endotracheal ventilation. Palliative care and hospice teams recognize the value of NPPV in the end-stage COPD population, and many programs are able to provide this treatment, although its availability in these settings is highly variable.
Inhaled BD, inhaled and systemic corticosteroids (where appropriate), and other adjuvant therapies that relieve symptoms should be continued during palliative treatment for patients with advanced COPD (see Table 81-14). However, it is important to consider deprescribing medications for COPD patients whose care is shifting toward palliation and comfort measures only. In addition, smoking cessation counseling and treatments should be offered, even during palliative-only stages of COPD treatment.
Smoking cessation at any stage of disease can reduce exacerbation risk and improve QOL. Nicotine replacement and non-nicotine-based therapies can usually be safely prescribed in older smokers with COPD. Screening for and treatment of symptoms of anxiety, depression, and pain are additional components of optimal care. In addition to pharmacologic therapies, clinicians should also offer nonpharmacologic interventions, such as cognitive behavioral therapy, written action plans, and advanced relaxation techniques (eg, guided imagery training). PR can also improve symptom management for patients with COPD through training in breathing techniques and improving exercise capacity. Finally, simple environmental changes may help decrease the sensation of dyspnea, including use of a fan or opening windows and doors to increase air movement, maintaining a relatively cool temperature, and providing humidified air. Education and support for caregivers are essential.
For patients with uncontrolled symptoms despite maximal supportive and medical therapy, opioids may be considered for symptom relief. The American Thoracic Society guidelines support their use for refractory dyspnea at any stage of illness, both chronic and at the end of life. Opioids are thought to be effective in relieving dyspnea by several mechanisms, including decreasing respiratory drive through a direct effect on respiratory centers in the brain stem, altering the perception of dyspnea and anxiety (central effect), and acting directly on peripheral mu-receptors in the lung (bronchioles). Early studies of oral, sustained-release morphine in opioid-naive patients with advanced COPD (doses 10–30 mg orally) and a recent Cochrane Review suggest significant relief of dyspnea. Subsequent studies have not reported data to suggest the efficacy of one opioid (eg, morphine)
over another opioid (eg, oxycodone). As COPD progresses and patients near the end of life, swallowing may be impaired or the patient may experience episodes of worsening dyspnea. In this setting, intravenous or subcutaneous opioids may be indicated. Studies of nebulized morphine have failed to prove efficacy compared to placebo, although case studies report improvement in dyspnea. Safety in the end-stage COPD population is clearly a concern given the risk of CO2 retention in this population. While some data
suggest that carefully monitored patients who are given low-dose opioids in this setting are at low risk for opioid-induced respiratory failure, a recent analysis of the US Medicare database demonstrated increased risk of hospitalization for respiratory conditions among those taking opioids, and in a Canadian study, new incident use of opioids was associated with increased respiratory-related and all-cause mortality. If used, opioids should be titrated carefully to patient-reported dyspnea. Importantly, although benzodiazepines may afford relief of anxiety, they pose a risk of dependency and do not consistently relieve dyspnea. The risk of respiratory failure also increases with concurrent use of benzodiazepines and opiates and requires close monitoring, consistent with the overall goals of care. Constipation, cognitive impairment, somnolence, and delirium are the most frequently experienced adverse drug events. These can be anticipated and managed prophylactically with close monitoring, careful use of low-dose antipsychotics, opioid dosage adjustment in response to adverse cognitive effects, and prescription of a laxative regimen to prevent constipation in any patient who requires daily opioids.
Finally, data suggest that patients and families facing serious illnesses such as COPD value clear and honest communication about their disease process. The majority of patients want all information, both positive and negative, but also value statements of “hope” integrated into the shared information. Patients and families also desire the opportunity to discuss options for care, especially near the end of life. Several communication guidelines are available to assist medical providers in discussing distressing news, such as the results of medical tests that reveal progression of their disease. One widely shared framework is called SPIKES, created by Dr.
Robert Buckman. In summary, these steps include:
S Setting: Ensuring that the setting is appropriate to provide patient comfort and privacy and that adequate support is provided as
needed/desired by the patient.
P Perception: Assessing the patient’s and/or family’s understanding of illness.
I Invitation: Obtaining the patient’s invitation or permission to discuss distressing news (and respecting their choice not to discuss such news).
K Knowledge: Giving knowledge and information to patient and family in a clear and concise manner, avoiding complex technical/medical terminology.
E Empathy: Addressing patient’s emotions with empathic responses.
S Strategy and summary: Creating a care plan for the next steps in treatment and follow-up, as well as providing contact information for future concerns.
Once information is shared, the medical provider can begin to discuss goals of care by asking patients what is most important to them. For example, a clinician may say to a patient with COPD: “We discussed that time may be short given this is your third admission to the ICU in 2 months and, each time you come to the hospital, your ability to care for yourself deteriorates significantly. Given this reality, what is most important to you? What do you hope for? What has been left undone?” Once the patient’s and/or family’s goals are elucidated, clinicians can help guide medical decision-making toward treatments that are likely to achieve patient-centered goals, while avoiding treatments that are unlikely to meet those goals. Ideally, these discussions occur in the outpatient setting, with medical providers whom the patient knows well and trusts, when the patient is not in a crisis, and when time is not limited. Advance directive completion and identification of a proxy decision maker are two key outcomes of these discussions, particularly in the ambulatory setting. While patient-centered palliative care benefits patients with COPD and their families across a spectrum of needs, access to it currently remains limited. Health care professionals and societies should advocate for more resources for palliative care services for patients with advanced lung disease, even in the absence of cancer.
CONCLUSION
COPD presents a major public health challenge, projected worldwide to be a leading cause of disability (fifth) and death (third) by 2030. This is concurrent with a demographic shift toward an aging population, further amplifying the public health challenge. In particular, older persons are at high risk of developing COPD, given the age-related decline in respiratory physiology and the cumulative effect of frequent exposures to tobacco smoke, respiratory infections, air pollutants, and occupational exposures across the adult lifespan. Moreover, both advancing age and COPD are associated with increasing multimorbidity, polypharmacy, and functional decline, as well as recurrent hospitalizations and end-of-life decisions. Hence, the management of COPD in older persons requires an approach that considers the interactions between aging and disease and includes a multidisciplinary team of health care providers.
ACKNOWLEDGMENT
The authors would like to formally acknowledge the important contributions of authors who contributed substantially to the previous version of this chapter, including Dr. Carlos A. Vaz Fragoso and Dr. Sean M. Jeffery.
FURTHER READING
Agusti A, Hogg JC. Update on the pathogenesis of chronic obstructive pulmonary disease. N Engl J Med. 2019;381(13):1248–1256.
Balte PP, Chaves PHM, Couper DJ, et al. Association of nonobstructive chronic bronchitis with respiratory health outcomes in adults. JAMA Intern Med. 2020;180(5):676–686.
Barnes PJ. Pulmonary diseases and ageing. In: Harris JR, Korolchuk VI, eds. Biochemistry and Cell Biology of Ageing: Part II, Clinical Science, Subcellular Biochemistry 91. Singapore: Springer Nature Singapore Pte Ltd; 2019:45–74.
Barrons R, Pegram A, Borries A. Inhaler device selection: special considerations in elderly patients with chronic obstructive pulmonary disease. Am J Health-Syst Pharm. 2011;68:1221–1232.
Celli BR, Wedzicha JA. Update on clinical aspects of chronic obstructive pulmonary disease. N Engl J Med. 2019;381(13):1257–1266.
Global Initiative for Chronic Obstructive Lung Disease: 2021 Report. www.goldcopd.org. Accessed June 30, 2021.
Halpin DMG, Criner GJ, Papi A, et al. Global Initiative for the Diagnosis, Management and Prevention of Chronic Obstructive Lung Disease: The 2020 GOLD Science Committee report on COVID-19 and chronic obstructive pulmonary disease. Am J Respir Crit Care Med. 2021;203(1):24–36.
Leuppi JD, Schuetz P, Bingisser R, et al. Short term vs conventional glucocorticoid therapy in acute exacerbations of chronic obstructive pulmonary disease. JAMA. 2013;309:2223–2231.
Lindenauer PK, Stefan MS, Pekow PS, et al. Association between initiation of pulmonary rehabilitation after hospitalization for COPD and 1-year survival among Medicare beneficiaries. JAMA. 2020;323(18):1–11.
Lowe KE, Regan EA, Anzueto A, et al. COPD Gene 2019: redefining the diagnosis of chronic obstructive pulmonary disease. Chron Obst Pulm Dis. 2019;6(5):384–399.
Macrea M, Oczkowski S, Rochwerg B, et al. Long-term noninvasive ventilation in chronic stable hypercapnic chronic obstructive pulmonary disease. An Official American Thoracic Society Clinical Practice Guideline. Am J Respir Crit Care Med. 2020;202(4):e74–e87.
Maddocks M, Lovell N, Booth S, et al. Palliative care and management of troublesome symptoms for people with chronic obstructive pulmonary disease. Lancet. 2017;390:988–1002.
Murphy PB, Rehal S, Arbane G, et al. Effect of home noninvasive ventilation with oxygen therapy vs oxygen therapy alone on hospital readmission or death after an acute COPD exacerbation: a randomized clinical trial. JAMA. 2017;317(21):2177–2186.
Quanjer PH, Stanojevic S, Cole TJ, et al. Multi-ethnic reference values for spirometry for the 3–95 year age range: the global lung function 2012 equations. Eur Respir J. 2012;40(6):1324–1343.
Ritchie AI, Wedzicha JA. Definition, causes, pathogenesis, and consequences of chronic obstructive pulmonary disease exacerbations. Clin Chest Med. 2020;41(3):421–438.
Schneider JL, Rowe JH, Garcia-de-Alba C, et al. The aging lung: physiology, disease and immunity. Cell. 2021;184:1990–2019.
Silverman EK. Genetics of COPD. Annu Rev Physiol. 2020;82:413–431.
Spruit MA, Singh SJ, Garvey C, et al. An official American Thoracic Society/European Respiratory Society statement: key concepts and advances in pulmonary rehabilitation. Am J Respir Crit Care Med. 2013;188:e13–e64.
Strnad P, McElvaney NG, Lomas DA. Alpha-1-antitrypsin deficiency. N Engl J Med. 2020;382(15):1443–1455.
Vaz Fragoso CA, Gill T. Respiratory impairment and the aging lung: a novel paradigm for assessing pulmonary function. J Gerontol Med Sci. 2012;67:264–275.
Chapter 82
Aging of the Kidney
Jocelyn Wiggins, Abhijit S. Naik, Sanjeevkumar R. Patel
CLINICAL RELEVANCE
Data on individuals reaching end-stage kidney disease (ESKD) are collected by the US Renal Data System (USRDS). As a condition for coverage, all dialysis units receiving Medicare funding must file data with the Centers for Medicare and Medicaid Services (CMS). The 2020 USRDS annual data report shows that approximately 1.27 in 1000 persons aged 65 to 69 initiate treatment for ESKD each year. For the 70- to 75-year-old age group, the incidence rate of ESKD is 1.43 per 1000 persons, and this incidence peaks in the 80 to 84 age group at 1.82 per 1000. Over the last 10 years, the number of older individuals receiving renal replacement therapy has increased by 35% in those 75 years or older and by 43% in those older than 80 years. In contrast, the incidence of ESKD in the 20- to 44-year-old age group has remained flat over the last 10 years, with only modest growth in the 45- to 64-year-old age group. Although some of the increase in renal replacement therapy for the older population indicates a greater willingness to offer treatment to older individuals, much of the growth is owing to people surviving to experience the chronic changes that occur with aging. The kidney undergoes significant age-related change. Other common, age-related diseases such as hypertension and diabetes accelerate these changes.
SECTION C: Nephrology
THE AGING PROCESS
Aging in the kidney is characterized by changes in both structure and function. It must be emphasized that many aging studies have been performed on laboratory animals, particularly rodents, which demonstrate quite different patterns of aging from humans. For example, kidney weight increases throughout life in rats, while kidney mass and size in humans peak in the fourth decade and decline thereafter. Care should be taken when reading the literature to keep in mind that changes observed in animal models may not reflect parallel changes seen in humans. Historical data from human postmortems describing changes in the kidney made no effort to exclude patients with kidney disease or significant comorbidities. More recently, data on aging have come from longitudinal studies, such as the Baltimore Longitudinal Study of Aging, in which the medical histories of the study volunteers are well documented. There are also data accumulating from older living kidney donors who undergo rigorous work-up of their kidney function and screening for comorbid conditions. Such individuals are presumed to have undergone healthy aging and thus allow the acquisition of normal kidney aging data. The aging kidney is generally characterized by a spontaneous progressive decline in renal function accompanied by thickening of the basement membrane, mesangial expansion, and progressive glomerulosclerosis.
Learning Objectives
Understand normal kidney aging.
Classify kidney disease using the estimated glomerular filtration rate (eGFR).
Recognize environmental factors that impact the rate of decline in kidney function.
Recognize that genetic factors play a role in the age-related decline of kidney function.
Understand the general guidelines for managing patients with chronic kidney disease (CKD).
Understand the implications of aging on organ donation and receiving an organ transplant.
Key Clinical Points
All older adults have some decline in renal function.
Older patients can typically maintain normal physiologic homeostasis but are compromised in their ability to respond to challenges.
Kidneys become more susceptible to injury with advancing age.
Rates of decline in renal function in aging are quite variable and impacted by genetic and environmental factors.
Preventing people from reaching end-stage renal disease reduces care costs and dramatically improves the quality of life.
Functional Changes
Changes in renal function with age are well documented both in human and animal models. Although baseline homeostasis of fluids and electrolytes is maintained with normal aging, there is a progressive decline in renal reserve. This results in a compromise in the kidney’s ability to respond to either a salt or water load or deficit. This manifests clinically in patients being vulnerable to superimposed renal complications during acute illnesses.
Chronic conditions such as hypertension accelerate this age-related loss of renal reserve, and increased vulnerability in these patients should be anticipated. Age-related changes in function will be considered by separate functional domains within the kidney.
Renal blood flow Average renal blood flow decreases about 10% per decade, dropping from 600 mL/min/1.73 m² to 300 mL/min/1.73 m² by the ninth decade. This is accompanied by increasing resistance in both afferent and efferent arterioles. These changes occur independently of a decline in cardiac output or reductions in renal mass. This decline in renal blood flow contributes to the decrease in efficiency with which the aging kidney responds to fluid and electrolyte load and loss.
Glomerular filtration rate Newer data have shown a wide variation in the rate and extent of changes in the kidney within the older population. Approximately 30% of the population shows no measurable decline in renal function with normal aging. The bulk of the population loses about 10% of the glomerular filtration rate (GFR) and 10% of renal plasma flow per decade after the fourth decade of life. Between 5% and 10% of the population shows an accelerated loss, even in the absence of identifiable comorbidities. Since there is also a steady loss of muscle mass with age, with a concomitant reduction in creatinine production, serum creatinine should remain relatively constant. Elevations in serum creatinine should therefore be taken seriously and not dismissed as normal aging. As can be seen in Figure 82-1, a serum creatinine at the upper limit of the “normal range” in an older individual represents a significant decline in renal function, and thought should be given to renal dosing of medications. These curves were calculated using the Cockcroft-Gault equation for a 70-kg man:
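The standard published Cockcroft-Gault formula, on which these curves are based, can be sketched as a small function (creatinine clearance in mL/min, serum creatinine in mg/dL; function and variable names are illustrative):

```python
def cockcroft_gault_crcl(age_years, weight_kg, scr_mg_dl, female=False):
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault equation."""
    crcl = ((140 - age_years) * weight_kg) / (72 * scr_mg_dl)
    # Results for women are multiplied by 0.85, as noted in the text.
    return crcl * 0.85 if female else crcl

# The same "normal" creatinine of 1.0 mg/dL implies very different clearances by age:
cockcroft_gault_crcl(35, 70, 1.0)  # ~102 mL/min
cockcroft_gault_crcl(85, 70, 1.0)  # ~53 mL/min
```

Note how, for a fixed serum creatinine, the estimated clearance falls roughly in half between ages 35 and 85, which is the point Figure 82-1 makes graphically.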
FIGURE 82-1. Relationship between serum creatinine and calculated creatinine clearance for men aged 35, 65, and 85. Calculations are based on a 70-kg man.
Results for women should be multiplied by 0.85, which shifts the curves downward. In frail older women with very little residual muscle mass, this equation probably overestimates GFR. This steady decline in renal function
with age manifests itself clinically as impaired ability to excrete a salt or water load. Extra care should be taken when replacing fluids in an older individual to prevent extracellular fluid overload and its consequences.
In 1999, an improved formula for estimating GFR was developed, known as the MDRD formula because it arose from the Modification of Diet in Renal Disease study. Many routine laboratories now automatically calculate an MDRD glomerular filtration estimate when a basic or comprehensive metabolic panel is ordered. In some institutions, it is necessary to order a “renal panel” for the calculation to be done. It is essential to understand that this formula was based on data from community-dwelling volunteers aged 18 to 70. It has never been validated in a very old or frail population. Several investigators have studied its performance in older patients and compared its efficacy with Cockcroft-Gault, creatinine clearances based on 24-hour urine collections, iothalamate clearances, or a combination of these methods. All of the studies have shown significant discrepancies between these methods in patients of advanced age and at both extremes of the weight spectrum. These limitations should be kept in mind when using the formula in clinical geriatric practice. Although iothalamate clearance is the gold standard, it is expensive and impractical for routine use. The most reliable results come from calculating creatinine clearance based on a 24-hour urine collection. This will always be an overestimate of the GFR: because some creatinine is actively secreted into the urine by the proximal tubule, not all of the urinary creatinine is filtered. It is, however, more reliable than the formula estimates in very old and frail people. In 2009, a new modification of MDRD, the CKD-EPI equation, was developed. This is more accurate than the original formula and is now used in routine clinical practice and reported by most clinical laboratories. Using cystatin C as a measure of GFR circumvents the problem of the decline in creatinine production with age, and its use has gained popularity in many clinical settings.
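As a rough illustration of how laboratories compute such an estimate, the 2009 CKD-EPI creatinine equation can be sketched as follows (coefficients as published in 2009; function and parameter names are illustrative, not from the text):

```python
def ckd_epi_2009_egfr(scr_mg_dl, age_years, female=False, black=False):
    """eGFR (mL/min/1.73 m^2) by the 2009 CKD-EPI creatinine equation."""
    kappa = 0.7 if female else 0.9    # sex-specific creatinine threshold
    alpha = -0.329 if female else -0.411
    egfr = (141.0
            * min(scr_mg_dl / kappa, 1.0) ** alpha
            * max(scr_mg_dl / kappa, 1.0) ** -1.209
            * 0.993 ** age_years)     # the age term drives the decline with aging
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159                 # race coefficient in the original 2009 equation
    return egfr
```

The `0.993 ** age` term builds the expected age-related decline directly into the estimate, which is one reason a "normal" serum creatinine maps to a much lower eGFR in an 85-year-old than in a 35-year-old.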
Classification of kidney disease The classification of kidney disorders has undergone a significant revision over the past few years. A consensus committee, sponsored by the National Kidney Foundation, published clinical practice guidelines in 2002. The traditional term chronic renal insufficiency has become chronic kidney disease (CKD), and end-stage renal disease (ESRD) has become kidney failure. CKD is defined as either kidney damage or decreased kidney function for 3 or more months. Kidney failure is defined as a GFR of less than 15 mL/min or the need to start kidney replacement therapy. Along with renaming kidney disease, the committee also developed a system of staging, with the expectation that a formal structure would help standardize diagnosis and create opportunities for preventive management. CKD is now classified into five stages, regardless of underlying diagnosis. The classification defines stage 1 as kidney damage (primarily proteinuria) with preserved GFR and progresses to stage 5, kidney failure (Table 82-1).
Declines in GFR are accompanied by a broad range of complications (Table 82-2). In 2007, the classification was revised and stage 3 CKD was subdivided into 3A and 3B. This was done because this is by far the category with the largest number of patients and there was significant heterogeneity in complications that develop within this group. Early recognition of impaired kidney function allows the physician to screen for and manage these complications and thus prevent comorbidities and declines in quality of life. National Kidney Foundation guidelines recommend referral to a nephrologist when a patient reaches stage 4 CKD for management of the complications of impaired function such as acidosis, phosphorus retention, and anemia. An informed discussion regarding kidney replacement therapy should also begin during stage 4.
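The staging logic described above can be sketched as a simple lookup. The eGFR boundaries below follow the widely used National Kidney Foundation scheme summarized in Table 82-1, including the 2007 split of stage 3 (stages 1 and 2 additionally require evidence of kidney damage such as proteinuria); the function name is illustrative:

```python
def ckd_stage(egfr, kidney_damage=False):
    """Map an eGFR (mL/min/1.73 m^2) to a CKD stage per the Table 82-1 scheme."""
    if egfr >= 90:
        return "1" if kidney_damage else "no CKD"
    if egfr >= 60:
        return "2" if kidney_damage else "no CKD"
    if egfr >= 45:
        return "3A"   # 2007 revision split stage 3 into 3A (45-59)...
    if egfr >= 30:
        return "3B"   # ...and 3B (30-44)
    if egfr >= 15:
        return "4"    # nephrology referral recommended at this stage
    return "5"        # kidney failure
```

A GFR of 50 mL/min/1.73 m² therefore maps to stage 3A, while 20 mL/min/1.73 m² maps to stage 4, the point at which referral and discussion of kidney replacement therapy are recommended.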
TABLE 82-1 ■ NATIONAL KIDNEY FOUNDATION CLASSIFICATION OF CHRONIC KIDNEY DISEASE
TABLE 82-2 ■ COMPLICATIONS OF CHRONIC KIDNEY DISEASE
Proteinuria Despite the significant decline in GFR that occurs with aging, proteinuria is not a normal feature of the aging process. Proteinuria is always a pathological finding and requires a full work-up. In contrast, in most rodent models, particularly in the rat, proteinuria is a normal feature of the aging kidney. This difference between humans and rodents should be kept in mind when reading the aging literature.
Tubular function Older individuals are more susceptible to acute renal failure. Much of the information on tubular function comes from animal studies, particularly rat models. Rats spontaneously develop proteinuria with aging, and this protein load is believed to be toxic to the tubule. Since proteinuria is not a feature of normal aging in humans, these animal studies may not paint an accurate picture of changes in tubular function in humans. There are also large numbers of studies in experimental animals looking at vasoconstrictive and vasodilatory responses in the older kidney. Impaired responses to atrial natriuretic peptide (ANP) and acetylcholine, and blunted cAMP responses to β-adrenergic stimulation, have all been implicated. Virtually none of these findings have been confirmed in humans. Functional magnetic resonance imaging (MRI) in older volunteers has demonstrated decreased ability to modulate renal medullary oxygenation. Whether this is caused by fixed vascular changes or changes in renal autocrine systems such as prostaglandins, dopamine, nitric oxide (NO), natriuretic peptides, or endothelin is not clear. The clinical result is increased sensitivity to acute ischemic renal failure.
Animal and human studies have shown impaired concentrating ability in the older kidney. Whether this is caused by intrinsic defects in the tubular epithelium or impaired response to antidiuretic hormone (ADH) is not clear. Studies have also demonstrated impaired capacity to acidify urine manifested clinically as reduced excretion of an acid load. Whatever the underlying mechanism, older individuals are less likely to be able to maintain normal homeostasis when challenged. Although there is an age-related decline in tubular functions such as glucose and amino acid transport, these declines closely parallel the decline in GFR and are believed to correlate with the loss of nephrons rather than age effects in the tubule. Older individuals are also more sensitive to nephrotoxic injury. Careful thought should be given to the choice and dosing of antibiotics and other nephrotoxic drugs including the use of iodinated contrasts.
Age-related changes in sodium and water handling are discussed in Chapter 39. An overview of potassium disorders in older adults is presented below.
Physiologic Regulation of Potassium Balance
As the major determinant of transmembrane potential, and therefore of neural, muscular, and neuromuscular function, both total body potassium homeostasis and the extracellular-to-intracellular potassium gradient are strongly defended. Potassium is the major intracellular cation, and the vast majority of body potassium stores are located in skeletal muscle cells. Thus, serum potassium concentration reflects total body stores imperfectly, especially under stress conditions. Serum potassium is influenced by two separate sets of factors: those related to kidney function and those related to the transmembrane movement of potassium in and out of cells. In the setting of relatively well-preserved kidney function, older individuals have a normal serum potassium under unstressed conditions, but as the GFR declines, the fractional excretion of potassium does not rise as much as it does in young individuals, in part because of lower aldosterone levels and/or aldosterone resistance in older adults. Several factors regulate the movement of potassium in and out of cells, including the glucoregulatory hormones insulin and glucagon, adrenergic stimulation of either α- or β-receptors, acid-base balance, and alterations in serum osmolality.
Hypokalemia, defined as a serum potassium less than 3.5 mEq/L, occurs rarely in healthy adults. However, because older individuals frequently are taking medications that alter potassium homeostasis, such as diuretics, or have a diet low in potassium, the frequency of hypokalemia can be as high as 5%. Hypokalemia associated with medications, underlying disease processes, or diet has been reported in 21% of hospitalized patients. Although many individuals with hypokalemia are clinically asymptomatic, hypokalemia is associated with a wide variety of clinical manifestations, including muscle weakness (particularly proximal muscle weakness), intestinal ileus, cardiac arrhythmias, polyuria, and fatigue. Hypokalemia is frequently accompanied by multiple electrolyte abnormalities, including hyper- or hyponatremia, metabolic acidosis or alkalosis, and hypomagnesemia.
Individuals with very poor nutrition such as those with alcoholism or eating disorders may present with normal or minimally decreased potassium that belies the extent of their potassium deficiency. The potassium level may plummet with refeeding, leading to potentially catastrophic cardiac events, especially if this has occurred in the setting of underlying organic heart disease. A similar dramatic fall in potassium can occur with aggressive insulin therapy or aggressive bicarbonate replacement. For these reasons, it is recommended that potassium replacement, parenteral, enteral, or both, be initiated immediately when these therapies are anticipated. Avoidance of dextrose-containing parenteral fluids should be considered in the setting of severe hypokalemia, until it can be at least partially corrected.
Hypokalemia can result from poor intake (eating disorders, alcoholism), increased losses (diuretics, mineralocorticoid excess, diarrhea), or shift of potassium from the extracellular to the intracellular compartment (insulin therapy). Often, the history alone will alert the clinician to the most likely cause of hypokalemia, but additional testing of the urine for potassium, creatinine, and osmolality can effectively narrow the differential diagnosis by establishing whether there are excessive urinary potassium losses. An isolated urine potassium of greater than 40 mEq/L suggests excessive renal losses.
The safest method of potassium replacement is oral, through potassium-rich foods or potassium supplements, as overdose is unlikely. For severe hypokalemia or hypokalemia associated with acute cardiac arrhythmias, however, parenteral potassium may be required. The amount of potassium needed to replace losses may be calculated, but administering relatively small doses such as 10 or 20 mEq at a time, with frequent repeat measurements, is the safest approach. For patients who require continuous diuretic therapy, addition of a potassium-sparing diuretic in low dose may blunt potassium losses. However, these agents need to be used carefully in individuals with CKD or those who are already taking a medication that blunts renal potassium excretion.
Hyperkalemia rarely occurs in the setting of normal kidney function because of the tremendous capacity of the kidneys to excrete the daily potassium load. Older patients are more susceptible to hyperkalemia because of the loss of kidney function, the loss of muscle mass, the changes in regulation of muscle ion content described earlier, lower levels of renin and aldosterone, and the use of many medications for chronic conditions that are associated with hyperkalemia, such as angiotensin-converting enzyme (ACE) inhibitors, angiotensin receptor blockers, and potassium-sparing diuretics. Older individuals also are more likely to have glucose intolerance, which may blunt the translocation of potassium into the intracellular compartment, and type IV renal tubular acidosis, which diminishes renal potassium excretion even in the presence of relatively normal kidney function. Hyperkalemia is seen in up to 10% of hospitalized patients. Despite the theoretical predisposition to hyperkalemia, it has been difficult to show that older patients using inhibitors of the renin-angiotensin-aldosterone system, singly or in combination, have a higher incidence of hyperkalemia than do younger patients.
The definition of hyperkalemia varies from laboratory to laboratory and from clinical report to clinical report. Generally, a potassium of greater than
5.3 mEq/L is accepted as hyperkalemia. Spurious hyperkalemia, where the measured potassium is elevated but the ambient serum potassium is normal, can occur in the setting of hemolysis of the blood sample associated with difficult phlebotomy or deterioration of the sample, severe thrombocytosis, severe leukocytosis, or red cell disorders. If this is suspected, obtaining a plasma potassium would be the next appropriate step. Hyperkalemia is frequently asymptomatic but muscle weakness and fatigue are the most common symptoms. The heart is the major target of significant hyperkalemia as increasingly greater potassium concentrations result in varying degrees of heart block progressing to cardiac arrest. Unfortunately, the electrocardiographic manifestations of hyperkalemia may be subtle and missed; conversely, the same level of hyperkalemia may produce vastly different effects on the electrocardiogram (ECG) including sinus bradycardia, complete heart block, wide QRS tachycardia, or the classic
sine wave pattern. The widely taught progression of the ECG manifestations of hyperkalemia such as peaked T waves, prolonged PR interval, absence of P wave, and QRS prolongation is seldom seen. Thus, the absence of ECG findings in the presence of a high potassium level should not lead the clinician to conclude that the hyperkalemia is erroneous or insignificant. The physical examination may show muscle weakness, a reduction in deep tendon reflexes, and bradycardia.
The causes of hyperkalemia are numerous. Older patients are generally more prone to hyperkalemia primarily due to the reduction in kidney function. Most cases of significant hyperkalemia are multifactorial due to a reduction of kidney function coupled with excessive potassium intake, use of medications that reduce potassium excretion or movement into cells, or occurrence of acute kidney injury superimposed on CKD. A complete history can generally identify these factors. Recent studies have identified particularly high-risk situations for severe hyperkalemia in older adults including the use of trimethoprim/sulfamethoxazole in patients on spironolactone, the use of two inhibitors of the renin-angiotensin-aldosterone axis, the use of nonsteroidal anti-inflammatory drugs, and the presence of type 2 diabetes. One additional diagnosis to consider when an older patient presents with unexpected hyperkalemia is adrenal insufficiency. Modest adrenal insufficiency in the geriatric population may present episodically with recurrent volume depletion and acute kidney injury with varying degrees of hyponatremia and hyperkalemia which correct easily with the administration of parenteral sodium chloride. Often, the only symptom is fatigue.
Treatment of hyperkalemia is determined by the severity and the presence or absence of significant cardiac effects. Asymptomatic, non–life-threatening hyperkalemia can be approached in the outpatient setting through elimination of medications that cause hyperkalemia, improving hydration status, and addressing any reversible causes of reduction in kidney function such as medications and volume depletion. Severe hyperkalemia manifested as advanced degrees of conduction delays or bradycardia will require, in addition to the above measures, more immediate interventions. Intravenous calcium gluconate will stabilize the myocardial cell membrane, decreasing the potential for cardiac arrest. This effect is essentially immediate, but transient, and does not lower potassium levels. Intravenous insulin, inhaled
β-agonists, and, if the individual also has a metabolic acidosis, intravenous
bicarbonate, will enhance potassium entry into the cells, thus lowering serum potassium and diminishing the risk of a fatal cardiac event. Insulin works fastest, within 15 minutes, but its effect is more transient than that of the β-agonists. All of these interventions are temporizing, and ultimately treatment is directed toward removal of excessive potassium from the body. The three options, listed in order of increasing aggressiveness, are forced diuresis with loop diuretics, the use of exchange resins such as sodium polystyrene sulfonate, and dialysis. The choice of which modality or modalities to employ depends on the severity of hyperkalemia, the level of kidney function, the presence or absence of any intestinal disease, and patient/physician preference. Without doubt, the least invasive approach, forced diuresis, is highly effective in the presence of preserved kidney function. When kidney function is impaired, exchange resins and/or dialysis can be considered. For severe hyperkalemia, the immediate therapies as well as oral exchange resins would be required.
Underlying Structural Changes
There are at least 100 years of meticulous research describing the anatomical changes that underlie the functional changes that are observed in patients as they age.
Gross anatomy Kidneys grow vigorously from birth through adolescence, reaching their maximum weight and volume during the third decade of life. In humans, although not in most laboratory animals, renal mass starts to decline after the fourth decade and continues its decline throughout the remaining lifespan. Most of the decrease in weight and volume appears to happen in the cortex, with relative sparing of the medulla.
Glomerulus The young healthy human kidney contains roughly 1 million nephrons. There is no evidence for postnatal nephrogenesis. This underlies the hypothesis that low birth weight babies might have fewer initial nephrons and as a result are more susceptible to renal failure in later life. Although there are some observational data to support this hypothesis, no causal relationship has been proved. There is a steady decline in nephron number with age that starts around the fourth decade. This appears to be driven by podocyte loss. These specialized cells are postmitotic and if damaged in any way, cannot be replaced. This decline is believed to underlie the reduction in GFR discussed above. Kidneys obtained at autopsy from patients with no known history of renal disease have been studied. Light microscopy showed the development of a focal sclerosing process, accompanied by thickening of
the glomerular basement membrane. There was a steady progression with age in the percentage of glomeruli that were scarred. By age 50, all subjects examined had some evidence of sclerosis, with the percentage of sclerotic glomeruli increasing steadily with age.
Age-related glomerulosclerosis Sclerotic glomeruli typically first appear in the fourth decade of life. This starts as a segmental process with one part of a glomerulus becoming acellular and the normal architecture being replaced by extracellular matrix. The glomerular tuft becomes adherent to Bowman capsule (Figure 82-2). Gradually an entire glomerulus becomes sclerosed and shrivels down with resultant loss of that nephron and its filtration capacity. It is not known what triggers this pattern of focal sclerosis, which is apparently randomly scattered throughout the cortex. The glomerular tuft increases in size with age. Concomitant with this expansion is an increase in endothelial and mesangial cells, such that the ratio of cells to glomerular area remains constant. Podocytes, the specialized cells that form the filtration barrier in the glomerulus, are postmitotic. They are not able to multiply in response to the increase in tuft volume and become a progressively smaller percentage of the total cells making up the glomerular tuft. As the filtration area that they have to cover increases, it is believed that they may detach from the basement membrane, leaving a denuded area behind. It is this area of bare basement membrane that acts as the trigger for the sclerosing process. Many different experimental models of glomerulosclerosis have concluded that loss of the podocyte and its inability to be replaced are the sentinel events that trigger sclerosis. A transgenic rat model that expresses the diphtheria toxin receptor on the podocyte has been developed to deplete podocytes in a dose-dependent manner. This model has shown that the loss of podocytes precedes the appearance of sclerosis, and podocyte markers can be detected in the urine prior to the development of global sclerosis.
FIGURE 82-2. Renal glomeruli from a 24-month Fischer 344 rat stained with a podocyte marker, GLEPP1, and counterstained with PAS. Left panel: normal glomerulus showing normal architecture of the glomerular tuft. Right panel: age-related glomerulosclerosis showing normal cellular architecture replaced by extracellular matrix and adherence to Bowman’s capsule.
Models of induced glomerular injury, of course, do not exist in humans. However, research has shown selective loss of podocytes in the kidneys of people with type 1 diabetes as diabetic nephropathy progresses. Podocyte number per glomerulus is the best predictor of progression of diabetic nephropathy in Pima Indians with type 2 diabetes. Studies of the aging process in rat kidneys noted that a decline in podocyte counts is accompanied by the appearance of glomerulosclerosis. Studies to address this in aging humans are currently being conducted.
Some authors have suggested a role for the mesangial cell in initiating the sclerosis process. Certainly with aging, there is an increase in mesangial matrix and in mesangial cell numbers. However, this increase is just as marked in strains of rat that do not develop age-related glomerulosclerosis, as it is in strains that do. Several investigators have studied mesangial cell activation in rat models of glomerulosclerosis and found little or none. This suggests that mesangial expansion is a benign manifestation of the aging process rather than a pathological one.
Tubule With the loss of the glomerulus, the tubular section of the nephron usually degenerates and is replaced by connective tissue. Tubular hypertrophy then occurs in the remaining nephrons, principally in the proximal convoluted tubule. This appears to result from both hypertrophy and hyperplasia. With thinning of the cortex, there is a decrease in tubule length and development of diverticula in the distal convoluted tubule. As nephrons
are lost, there is generalized tubular interstitial fibrosis. The structure of the distal tubule does not appear to change significantly with age.
Vasculature Renal arteries undergo age-related thickening, similar to that seen throughout the circulation. Smaller arteries may become tortuous and show luminal irregularities. When a glomerulus becomes sclerosed, there is frequent formation of an arteriovenous shunt as the afferent and efferent arterioles develop a direct connection when the glomerular capillary is lost. This shunt is very important in maintaining medullary blood flow.
Physiologic studies in both animals and humans have documented a decline in renal blood flow and an increase in vascular resistance with age. Studies of renal perfusion in healthy older individuals from a pool of potential kidney donors have shown steady declines in renal perfusion with age that exceeded the reduction in renal mass, suggesting that declines in blood flow were a significant factor in the changes seen in renal function with age. The age-related increase in central arterial stiffness (discussed in Chapter 73) leads to increased forward transmission of a larger forward wave exposing the small arteries and microvessels in the renal parenchyma to damaging levels of pressure pulsatility and may contribute to kidney injury in aging.
Taken together, these changes contribute to the susceptibility of older individuals to acute renal failure, volume overload, and electrolyte abnormalities.
Infarcts may occur in the kidney, just as they do in other tissues of the body. Since one-fifth of the circulating volume passes through the kidney each minute, the kidney is also particularly susceptible to embolization. If other signs of embolization are visible clinically, it is highly probable that the kidney is also undergoing embolization, and embolic disease should certainly be kept in mind in an older individual with widespread vascular disease who demonstrates accelerated loss of renal function.
MECHANISMS UNDERLYING THE DECLINE IN KIDNEY FUNCTION
There is no clear-cut consensus about what mechanisms may underlie the structural and functional changes occurring in the kidney in the older population. It is fairly clear, however, that there are both predisposing genetic and environmental factors that play a role.
Genetic Predisposition
There are as yet no genes known to cause age-related glomerulosclerosis, although several podocyte genes have been identified as causative in childhood focal segmental glomerulosclerosis. Accumulating evidence from animal studies, combined with evidence of genetic predisposition in humans, has led to a concerted effort to seek genes that may increase susceptibility to renal failure. Many of the genes identified as playing a role in the aging process, such as IGF-1 and target of rapamycin (TOR), also seem to play a role in kidney aging. Rodent studies using rapamycin to slow the aging process are ongoing and are likely to yield information on renal aging.
Animal models Rats are particularly susceptible to kidney failure, and much of the work with models of renal disease has been carried out in laboratory rat strains. There are very marked strain differences in susceptibility to age-related glomerulosclerosis. Since these rats are maintained in pathogen-free environments and are fed uniform, scientifically developed diets, this strongly suggests a genetic basis for the development of age-related glomerulosclerosis. The appearance of glomerulosclerosis has been reported as early as 5 months in the Milan normotensive rat, with extensive disease by 10 months of age. This occurs in the total absence of hypertension and is not ameliorated by administration of ACE inhibitors. Wistar rats were used as controls in this study and showed no significant disease during the same time period. In Sprague-Dawley rats, spontaneous age-related glomerulosclerosis first becomes apparent around 9 months of age, and disease becomes widespread by 18 months of age. In studies of aging in Fischer 344 rats, little glomerulosclerosis is seen until almost 2 years of age, with fairly rapid progression thereafter. Other rat strains appear remarkably resistant to renal disease. Brown Norway rats show minimal sclerosis even at 32 months of age, an advanced age relative to the rat lifespan.
Human studies Clearly, these kinds of studies cannot be duplicated in humans. However, there are observational data that support a similar variation in genetic susceptibility in humans. Cross-sectional studies, donor organ data, and longitudinal studies clearly show wide variation in kidney function with age. Around 5% to 10% of the population show accelerated loss of kidney function with age even in the absence of accelerating factors such as hypertension, while 30% show no measurable decline. In the presence of predisposing comorbidities, there is also wide variation in the
development of kidney disease. Some people with diabetes may never develop nephropathy, while others develop rapidly progressive kidney disease early in the course of their diabetes, suggesting an underlying genetic susceptibility. Within the African-American population, rates of kidney disease are much higher than in the Caucasian population, independent of the precipitating cause. Within an ethnic group, there are also distinct differences in vulnerability. An African-American who develops any predisposing disease, be it hypertension, diabetes, or lupus, and who has a first-degree relative on renal replacement therapy has a ninefold increased risk of developing kidney disease compared to another African-American with the same disease burden but no family history of kidney disease. Similarly, human immunodeficiency virus (HIV)-associated glomerulosclerosis occurs almost exclusively in the African-American population, while HIV-associated mesangial hyperplasia and immune-complex glomerulonephritis occur equally in all ethnic groups. Much of this disparity can now be attributed to genetic variants in the apolipoprotein L1 (APOL1) gene found only in individuals with recent African ancestry. These variants greatly increase rates of hypertension-associated ESKD, focal segmental glomerulosclerosis, HIV-associated nephropathy, and other forms of nondiabetic kidney disease. Thus, there is evidence for a genetic predisposition with respect to the development of glomerulosclerosis. Considerable effort and resources are currently being directed toward identifying genes that predispose to kidney disease.
Environmental Predisposing Factors
Several diseases predispose to kidney failure and accelerate the progress of age-related glomerulosclerosis. By far the most prevalent of these are hypertension and diabetes, both common disorders in the older population. There are, however, several other mechanisms that have been postulated to underlie the aging changes in the kidney.
Diet One of the most striking aspects of rodent models of age-related nephropathy is its complete reversal with caloric restriction. Both the anatomical and functional changes related to aging in the kidney are completely abolished in animals that are fed two-thirds of the calories given to their ad libitum-fed litter mates. Even though these animals live one-third longer than their ad libitum-fed littermates, they do not develop age-related
glomerulosclerosis. Several explanations have been proffered to account for this observation.
Free radicals and lipid peroxides One possible explanation for the profound effects of caloric restriction is a reduction in the generation of free radicals and lipid peroxides. There is a wide body of literature discussing the damaging effects of free radicals on cellular systems and the role that this plays in aging (refer to Chapters 1 and 40). The main consequence of free radical production is lipid peroxidation, which results in damage to cellular proteins, lipids, and nucleic acids. Increased caloric intake is believed to fuel increased free radical production with accelerated aging damage. This hypothesis has generated interest in the role of antioxidants in slowing the aging process. The effects of supplementing the diets of Sprague-Dawley rats with vitamin E have been studied. Although there were reductions in markers of oxidative stress and the rate of decline in GFR slowed, glomerulosclerosis still developed. Vitamin E supplement studies in humans have also been disappointing.
Protein restriction The benefits of caloric restriction have been attributed to concomitant reductions in dietary protein. There is a large body of older literature on protein restriction in experimental animal models of kidney disease. In many of these studies, experimental and control animals were not fed isocaloric diets, and protein restriction also meant caloric restriction.
Many of these studies have shown a slowing in the progression of established kidney disease; however, the results were not corrected for total calorie content. Studies of spontaneous age-related glomerulosclerosis in Fischer 344 rats have shown that protein restriction was much less effective than caloric restriction in preventing age-related declines in kidney function. Modest benefits from protein restriction when rats were fed isocaloric diets have been demonstrated, as have some benefits when the type of protein in the diet was changed from casein to soy. In contrast, caloric-restricted animals had little or no decline in kidney function and did not develop significant glomerulosclerosis despite significantly increased longevity. Rats that were fed a high-protein diet, but restricted to 60% caloric intake compared to their ad libitum-fed litter mates, also showed dramatic reductions in age-related glomerulosclerosis. Clearly, protein restriction does have some benefit in the prevention of age-related nephropathy, but that advantage is small compared to that achieved with caloric restriction. Very few of the studies examined changes in sodium,
phosphate, and calcium content of the experimental diets. Many of the results of protein restriction can be duplicated by phosphate restriction. The relevance of these studies to humans remains unclear. An observational study that included individuals up to 80 years of age compared healthy vegetarians consuming an average of 30 g/day of protein with nonvegetarians consuming an average of 100 g/day and showed no differences in kidney function between the groups. There is evidence to support protein restriction in patients with established renal disease to reduce symptoms of uremia, but none to support a role for protein restriction in preventing age-related changes in the human kidney.
Lipids There is a well-established link between lipids and cardiovascular disease, and restriction of fat intake accompanied by treatment of hyperlipidemia has been shown to be efficacious in preventing or slowing the progress of cardiovascular disease. Certainly, protecting the integrity and function of the vascular supply to the kidney is important to maintaining normal function. Evidence for benefits from the manipulation of lipids in kidney disease comes mainly from animal models of diabetes. Animal studies using high-fat diets have shown accelerated progress of kidney disease, but in most cases diets were not corrected for total caloric intake. Lipogenic diets fed to Sprague-Dawley rats resulted in an earlier appearance of widespread glomerulosclerosis compared to standard-fed animals. The use of lipid-lowering agents in a variety of animal models of glomerulosclerosis has shown reductions in the incidence of glomerular damage. Patients with established renal disease, with or without diabetes, have more rapid deterioration of kidney function in the presence of hyperlipidemia. The relevance of lipids to the age-related decline in kidney function remains to be established, but it would certainly be reasonable to recommend a low-fat diet and lipid management in patients with declining renal function.
Hyperfiltration The term hyperfiltration is used to describe putative glomerular injury from long-term increases in intraglomerular pressure. The age-related loss of glomeruli causes intraglomerular hypertension with hypertrophy of remaining glomeruli. Persistent intraglomerular hypertension causes pressure-mediated renal injury. Most of the supporting evidence for this mechanism comes from animal models where one kidney and part of the remaining kidney are removed, leaving a partial kidney remnant. These animals develop a pattern of renal damage in the remnant indistinguishable
from age-related glomerulosclerosis but over an accelerated time course. Long-term follow-up in humans for more than 20 years has not shown accelerated declines in renal function in people who have donated one of their kidneys for transplantation, even though the remaining kidney does undergo hypertrophy. Nutrition may also play a part in hyperfiltration. After a meal of protein, increases in both renal blood flow and GFR have been demonstrated in animals as well as in humans. Excessive intake, particularly of animal proteins, could therefore cause constant hyperperfusion in the kidney, leading to intraglomerular hypertension and accelerated glomerulosclerosis. This would certainly help to explain the benefits so clearly seen with caloric restriction in laboratory animals.
The efficacy of ACE inhibitors in preventing renal hyperperfusion damage early in the course of diabetes and hypertension would also lend support to the hyperfiltration hypothesis. Angiotensin II appears to be important in maintaining glomerular filtration pressure by vasoconstricting the efferent arteriole. ACE inhibition is believed to preserve renal function by blocking the vasoconstriction of the efferent arteriole and reducing intraglomerular pressure. Long-term ACE inhibition dramatically reduces the incidence of age-related glomerulosclerosis in Munich-Wistar rats.
However, the dose of ACE inhibitor administered to the treatment group was sufficient to significantly lower systolic blood pressure compared with the control animals. Whether low doses of ACE inhibitors would help to maintain normal renal function in humans and prevent the appearance of age-related glomerulosclerosis is a matter for speculation. There is no doubt about their efficacy in preventing a progressive decline in renal function when there is an underlying disease. It remains to be seen whether they have any role in modifying age-related changes.
CONSEQUENCES OF IMPAIRED KIDNEY FUNCTION
Patients who show signs of significantly diminished kidney function should be managed aggressively, regardless of their age. Individuals who reach dialysis have a mortality rate four to five times that of age-matched controls (Table 82-3) and are at 10 times the risk of a cardiovascular event. It costs more than $80,000 per year to maintain someone on dialysis without including the cost of treatment for any other health problems. Transplantation is also expensive and requires patients to remain on toxic
immunosuppressive regimens for the rest of their lives. Patients who show signs of impaired kidney function should be managed with the aim of preventing them from reaching end-stage disease. Aggressive measures should be taken to reduce blood pressure, with a target systolic blood pressure goal of less than 120 mm Hg. A regimen should be used that includes an ACE inhibitor or an angiotensin receptor blocker, particularly in patients who have albuminuria. However, it should be kept in mind that neither of these drug classes will confer significant renal protection in the absence of good blood pressure control. Lipids should also be aggressively managed, as should blood sugar and other potential accelerators of renal decline. The novel SGLT2 inhibitors reduce the risk of kidney disease progression among patients with diabetic kidney disease who are already taking ACE inhibitors, as well as the incidence of cardiovascular disease. Overweight patients should be encouraged to lose weight. Great care should be taken to avoid potential renal toxins, such as aminoglycoside antibiotics, nonsteroidal anti-inflammatory drugs (NSAIDs), and radiocontrast dyes. Medications excreted by the kidney should be appropriately dosed in amount and frequency. Maintaining residual renal function confers on the patient a greatly superior prognosis when compared to those on renal replacement therapies. Hopefully, the current emphasis on finding genes that predispose to declines in kidney function and the development of age-related glomerulosclerosis will help us to identify those at greatest risk before major losses of function have occurred. As with recent gains in the prevention of cardiovascular disease through control of risk factors, we anticipate that similar guidelines will become available for the prevention of age-related declines in kidney function.
TABLE 82-3 ■ LIFE EXPECTANCIES FOR SELECTED AGE GROUPS, COMPARING DIALYSIS PATIENTS WITH GENERAL POPULATION STATISTICS
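The dosing principle above can be made concrete with the Cockcroft-Gault formula, which many drug labels still reference for renal dose adjustment. The sketch below (function name and units are illustrative, not from this chapter) shows why an older patient with a "normal" serum creatinine may nonetheless need reduced doses of renally cleared drugs.

```python
def cockcroft_gault_crcl(age_years: float, weight_kg: float,
                         serum_cr_mg_dl: float, female: bool) -> float:
    """Estimated creatinine clearance (mL/min) by the Cockcroft-Gault formula."""
    crcl = (140 - age_years) * weight_kg / (72 * serum_cr_mg_dl)
    # The original formula applies a 0.85 correction factor for women
    return crcl * 0.85 if female else crcl

# An 80-year-old, 60-kg woman with a "normal" creatinine of 1.0 mg/dL
# has an estimated clearance of only ~42 mL/min, well into the range
# where many renally excreted drugs require dose or interval adjustment.
print(round(cockcroft_gault_crcl(80, 60, 1.0, female=True), 1))
```

Because muscle mass (and hence creatinine generation) falls with age, serum creatinine alone systematically overstates kidney function in older adults; an explicit clearance estimate like this makes the deficit visible.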
Kidney Donation
Kidneys from older donors are associated with shorter graft survival in recipients compared to those from younger donors. However, the ever-increasing mismatch between demand and supply in organ transplantation has led to an increase in the utilization of organs from older living and deceased donors. For example, utilization of living donors older than 55 years has increased from approximately 13% in 2008 to 22% in 2020, while the use of kidneys from older deceased donors has remained constant. This difference is in part due to concerns about accentuation of age-related changes when these organs are exposed to peri-donation and peri-transplant ischemia and reperfusion injury. Furthermore, in recipients these organs undergo additional immune and nonimmune stresses that likely cause further progression of age-related structural abnormalities and early graft loss or recipient death. The reluctance to utilize older deceased donor organs is in part driven by concerns about being penalized by regulatory bodies for poor transplant outcomes, despite studies showing that kidney transplant recipients receiving such kidneys on average still do better than those who remain on dialysis.
Kidney Transplantation
The proportion of individuals older than 65 years on the waitlist has increased from 18% in 2009 to 24% in 2019, suggesting an increasing burden of kidney disease in this age group, an increased willingness by transplant centers to list such patients, and a reduction in waitlist mortality. Many transplant programs routinely offer transplantation to people in their late sixties and early seventies if they are
otherwise in good health. This is reflected in the fact that donor transplant rates among those older than 65 years have increased from approximately 10 (per 100 waitlist years) in 2015 to 17 (per 100 waitlist years) in 2019.
In summary, kidney disease and failure are predominantly diseases of the older population. All older patients should have an estimate made of their GFR. If a deficit in their kidney function is identified, they should be managed aggressively to prevent progression to kidney failure. As CKD progresses, special attention should be paid to the choice of drugs and their dosing and to the use of contrast dyes for imaging.
FURTHER READING
Anderson AH, Yang W, Hsu CY, et al. CRIC Study Investigators. Estimating GFR among participants in the Chronic Renal Insufficiency Cohort (CRIC) Study. Am J Kidney Dis. 2012;60(2):250–261.
Bowling CB, Sharma P, Fox CS, O’Hare AM, Muntner P. Prevalence of reduced estimated glomerular filtration rate among the oldest old from 1988–1994 through 2005–2010. JAMA. 2013;310(12):1284–1286.
Fukuda A, Chowdhury MA, Venkatareddy MP, et al. Growth-dependent podocyte failure causes glomerulosclerosis. J Am Soc Nephrol. 2012;23(8):1351–1363.
Glassock RJ. An update on glomerular disease in the elderly. Clin Geriatr Med. 2013;29(3):579–591.
Naik AS, Afshinnia F, Cibrik D, et al. Quantitative podocyte parameters predict human native kidney and allograft half-lives. JCI Insight. 2016;1(7):e86943.
National Kidney Foundation. K/DOQI clinical practice guidelines for chronic kidney disease: evaluation, classification, and stratification. Am J Kidney Dis. 2002;39:S1–S246.
Organ Procurement and Transplantation Network (OPTN) and Scientific Registry of Transplant Recipients (SRTR). OPTN/SRTR 2019 Annual Data Report. Rockville, MD: Department of Health and Human Services, Health Resources and Services Administration; 2021.
O’Hare AM. Measures to define chronic kidney disease. JAMA. 2013;309(13):1343.
O’Hare AM, Armistead N, Schrag WL, Diamond L, Moss AH. Patient-centered care: an opportunity to accomplish the “three aims” of the national quality strategy in the Medicare ESRD program. Clin J Am Soc Nephrol. 2014;9(12):2189–2194.
O’Hare AM, Hotchkiss JR, Kurella-Tamura M, et al. Interpreting treatment effects from clinical trials in the context of real-world risk information: end-stage renal disease prevention in older adults. JAMA Intern Med. 2014;174(3):391–397.
Rule AD, Glassock RJ. Chronic kidney disease: classification of CKD should be about more than prognosis. Nat Rev Nephrol. 2013;9(12):697–698.
Rule AD, Glassock RJ. GFR estimating equations: getting closer to the truth? Clin J Am Soc Nephrol. 2013;8(8):1414–1420.
Treit K, Lam D, O’Hare AM. Timing of dialysis initiation in the geriatric population: toward a patient-centered approach. Semin Dial. 2013;26(6):682–689.
United States Renal Data System. 2020 annual data report: an overview of the epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2020.
Wiggins JE, Patel SR, Shedden KA, et al. NFkappaB promotes inflammation, coagulation, and fibrosis in the aging glomerulus. J Am Soc Nephrol. 2010;21(4):587–597.
Yaffe K, Ackerson L, Kurella-Tamura M, et al. Chronic Renal Insufficiency Cohort Investigators. Chronic kidney disease and cognitive function in older adults: findings from the chronic renal insufficiency cohort cognitive study. J Am Geriatr Soc. 2010;58(2):338–345.
Chapter 83
Kidney Diseases
Mark Unruh, Nitin Budhwar
INTRODUCTION
Given the relationship of kidney disease with age and chronic conditions, older adults have a higher frequency of both chronic kidney disease (CKD) and acute kidney injury (AKI). The older adult with kidney injury should be systematically evaluated and treated for AKI and CKD. As the population ages, there will be an increasing number of patients with advanced CKD and kidney failure. The older patient with kidney failure now has a multitude of choices for renal replacement therapy (RRT), including hemodialysis, peritoneal dialysis, conservative management, palliative care, and kidney transplantation. Primary care physicians and geriatricians should identify CKD in older patients and involve a multidisciplinary team in their care. The geriatrics team remains in the key role of addressing advance directives, planning long-term care, and continuing management of nonrenal problems. Older patients with CKD will remain an economic and medical challenge, and a multidisciplinary approach to their care will provide the best long-term outcomes.
AGING KIDNEYS
The kidneys undergo anatomic and physiologic changes that are not only the consequences of normal organ senescence but also of specific diseases that occur with greater frequency in older individuals. Structural changes that occur at an increasing rate with age include glomerular hypertrophy and glomerulosclerosis, tubular atrophy, and interstitial fibrosis as well as arteriolosclerosis. Functionally, several longitudinal studies have shown that
glomerular filtration rate (GFR) declines in the majority of people with older age independent of other chronic diseases. Additional details are provided in Chapter 82, Aging of the Kidney.
CHRONIC KIDNEY DISEASE (CKD)
Definition
CKD is a general term for heterogeneous disorders affecting the kidney’s structure and function. CKD is now recognized as a worldwide public health problem. Its management in the early stages falls to general practitioners and geriatricians. The presentation of patients with CKD varies and is related to disease etiology, severity, and rate of progression. CKD is classified into stages of disease severity based on GFR, as described in clinical practice guidelines (Figure 83-1).
FIGURE 83-1. Prognosis of chronic kidney disease (CKD) by glomerular filtration rate (GFR) and albuminuria category. (Reproduced with permission from Kidney Disease: Improving Global Outcomes (KDIGO) Diabetes Work Group. KDIGO 2020 Clinical Practice Guideline for Diabetes Management in Chronic Kidney Disease. Kidney Int. 2020;98[4S]:S1–S115.)
Learning Objectives
Recognize that chronic kidney disease (CKD) is most often caused by common systemic diseases, including diabetes and hypertension.
Understand that diabetic nephropathy is a chronic progressive kidney disease that requires treatment with an angiotensin-converting enzyme inhibitor (ACEI) or angiotensin receptor blocker (ARB) if tolerated, optimization of blood pressure and blood glucose levels, and management of comorbidities. SGLT2 inhibitors and GLP-1 receptor agonists show benefits in the treatment of diabetes and substantially reduce the risk of kidney disease progression.
Assess acute kidney injury (AKI) for pre-, post-, and intrarenal causes in the older adult. Even episodes of mild AKI can increase the risk for future CKD, and the etiology of AKI is often related to the sex, age, and location of the patient.
Understand the prognosis of older adults with kidney failure.
Characterize the approaches to managing end-stage kidney disease (ESKD) among older adults.
Describe the influence of multiple chronic health conditions on quality of life and functioning among older adults with kidney failure.
Recognize the challenges in providing care to older patients with kidney failure and concurrent cognitive impairment and frailty.
Discuss the role of clinicians and care teams in advance care planning for patients with ESKD.
Key Clinical Points
In patients with CKD, management of elevated blood pressure and avoidance of nephrotoxins can delay progression to ESKD.
Complications of CKD include iron-deficiency anemia and secondary hyperparathyroidism.
Proteinuria in the absence of hematuria is a sign of kidney damage, is a risk factor for progression of CKD to ESKD, and requires evaluation.
In patients with renovascular disease, intervention is usually indicated only if conservative management fails.
Clinical guidelines suggest that patients with progressive CKD should be managed in a multidisciplinary setting. Nephrologist referral should be considered under the following circumstances: AKI, urinary red cell casts, CKD and refractory hypertension, persistent abnormalities of potassium, recurrent or extensive nephrolithiasis, hereditary kidney disease, CKD stages 4 and 5, and severely increased albuminuria.
Older patients with kidney failure have a multitude of choices for kidney replacement therapy including hemodialysis, peritoneal dialysis, conservative management, palliative care, and kidney transplantation.
There are no age restrictions on access to kidney transplantation in the United States. Older kidney transplant patients tend to have less kidney allograft rejection and a higher complication rate from infections.
Kidney failure is associated with a markedly high mortality rate among older patients initiating dialysis—average 1-year survival rate is 50% for octogenarians and nonagenarians who start dialysis.
The criteria for CKD include an estimated GFR less than 60 mL/min/1.73 m2 (normal GFR in young adults is about 125 mL/min/1.73 m2; GFR < 15 mL/min/1.73 m2 is defined as kidney failure), albuminuria (urinary albumin-to-creatinine ratio > 30 mg/g), hematuria (any degree on urine dipstick), presence of casts in the urine sediment (seen on microscopy), or abnormalities on kidney imaging, for a duration of more than 3 months.
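These thresholds amount to a simple lookup. The sketch below (function names are illustrative) maps eGFR and the urinary albumin-to-creatinine ratio to the KDIGO G and A categories used in Figure 83-1; the requirement that an abnormality persist for more than 3 months still applies before CKD is diagnosed.

```python
def kdigo_g_category(egfr: float) -> str:
    """KDIGO GFR category from eGFR in mL/min/1.73 m2."""
    for threshold, label in [(90, "G1"), (60, "G2"), (45, "G3a"),
                             (30, "G3b"), (15, "G4")]:
        if egfr >= threshold:
            return label
    return "G5"  # eGFR < 15: kidney failure

def kdigo_a_category(uacr_mg_g: float) -> str:
    """KDIGO albuminuria category from urinary albumin-to-creatinine ratio (mg/g)."""
    if uacr_mg_g < 30:
        return "A1"  # normal to mildly increased
    return "A2" if uacr_mg_g <= 300 else "A3"

# A patient with eGFR 50 and ACR 40 mg/g falls into G3a/A2.
print(kdigo_g_category(50), kdigo_a_category(40))
```

Prognosis in Figure 83-1 worsens along both axes, which is why the two categories are reported together rather than as a single stage.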
It should be mentioned that some in the field have raised concerns that this classification overestimates the CKD burden in the older adult population when considering clinical outcomes. The equations estimating GFR have relied on creatinine, sex, age, and race as factors to predict kidney function. It has been determined that low estimated GFR (eGFR) and high albuminuria are independently associated with increased mortality and risk of developing end-stage kidney disease (ESKD), regardless of age, across a wide range of populations. In addition, even smaller decreases in eGFR (such as a < 30% reduction over 2 years) were strongly and consistently associated with increased mortality and risk of ESKD. Thus, CKD contributes significantly to
increased morbidity and mortality. However, patients with CKD are a very heterogeneous group and eGFR and albuminuria alone may not be sufficient to predict outcomes, especially in the older adult population. Therefore, care should be provided in context of the individual situation and needs.
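For reference, the creatinine-based estimating equations mentioned above have since been refit: the 2021 CKD-EPI creatinine equation dropped the race term and uses only creatinine, age, and sex. The sketch below uses the published constants of that 2021 refit; it is illustrative only and should be verified against the primary source before any clinical use.

```python
def ckd_epi_2021_egfr(scr_mg_dl: float, age_years: float, female: bool) -> float:
    """eGFR (mL/min/1.73 m2) by the 2021 race-free CKD-EPI creatinine equation."""
    # kappa and alpha are sex-specific constants from the 2021 refit
    kappa, alpha = (0.7, -0.241) if female else (0.9, -0.302)
    ratio = scr_mg_dl / kappa
    egfr = (142.0
            * min(ratio, 1.0) ** alpha     # applies when creatinine <= kappa
            * max(ratio, 1.0) ** -1.200    # applies when creatinine > kappa
            * 0.9938 ** age_years)         # age-related decline factor
    return egfr * 1.012 if female else egfr
```

The age term shows why a single creatinine value cannot be interpreted in isolation: a creatinine of 1.0 mg/dL estimates an eGFR near the 60 mL/min/1.73 m2 CKD threshold in a 70-year-old woman, but well above it in a 30-year-old with the same value.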
Epidemiology
Even though the prevalence and incidence of kidney failure treated with dialysis and kidney transplantation are monitored closely in many countries, estimating the burden of early stages of kidney disease remains difficult. In the United States, the prevalence of CKD based on estimated GFR and albuminuria was about 11.5% (4.8% in stages 1–2 and 6.7% in stages 3–5) in the general population but was 47% in people older than 70 years. CKD therefore constitutes a major health concern, particularly in older adults.
Pathophysiology (Classification)
Many systemic diseases can result in CKD, in particular diabetes mellitus and hypertension. Diabetic nephropathy (DN) and glomerular and tubulointerstitial diseases with defined etiologies and presentations will also be described. Hypertension can contribute to CKD (hypertensive nephropathy) and can also accelerate progression of CKD due to other etiologies.
Presentation
CKD is usually asymptomatic until GFR falls below approximately 15 to 20 mL/min/1.73 m2 when patients may report early signs and symptoms of uremia. Initial symptoms are nonspecific and include decreased energy levels and appetite, metallic taste, worsening of lower extremity edema, and
new onset or worsening of nocturia. Some patients notice bubbly or dark urine as a sign of proteinuria or hematuria, respectively.
Evaluation
Kidney function is usually assessed by serum creatinine and blood urea nitrogen (BUN) levels, electrolytes, urinalysis, and the urine protein-to-creatinine ratio (or quantitative albuminuria). In situations that may limit the accuracy of creatinine, cystatin C can provide another estimate of GFR. If these tests are abnormal, further evaluation may include a kidney ultrasound with Doppler to determine kidney size (enlarged kidneys
can be seen in patients with polycystic kidney disease [PKD], infiltrative processes, amyloidosis, and diabetic and HIV nephropathy; small kidneys can be congenital or due to long-standing CKD), echogenicity (increased in long-standing CKD and infiltrative processes), renal blood flow (decreased with long-standing CKD or with occlusion of renal arteries including after renal infarct), and renal pelvis dilatation (in ureteral or severe bladder outflow obstruction).
Previous blood, urine, and imaging test results should be obtained if available to determine duration and rate of change. A detailed medical and surgical history should be obtained, including childhood diseases, in particular edema and proteinuria in the past and history of rheumatic fever or other severe conditions. The family history can point toward inheritable kidney diseases (including PKD and focal segmental glomerular sclerosis [FSGS]). Social history may identify exposure to toxins and herbal remedies. Additionally, the history can help to identify risk factors for progression of CKD to ESKD. A comprehensive physical examination can detect elevated blood pressure, obesity, signs of heart failure, pulmonary and peripheral edema, liver disease, vasculitis/autoimmune disease, etc.
Management
Significant progress has been made in establishing approaches that target underlying specific diseases with the goal of improving disease-related outcomes. However, features of geriatric populations such as complex comorbidities, limited life expectancy, functional state, and health priorities may limit the utility of disease-oriented models of care. An individualized, patient-centered approach to care may have more to offer than a traditional disease-based approach to CKD in many older adults.
Attention should be given to cardiovascular disease, which is a risk factor not only for CKD but also for progression of CKD. Blood pressure control has been shown to limit the progression of CKD. A number of guidelines address target blood pressure goals in CKD, with recommendations aiming for systolic blood pressures between 120 and 130 mm Hg and diastolic pressures under 90 mm Hg. Given the complexity of the geriatric patient, numerous factors need to be considered, such as tolerance, cost, side-effect profile, polypharmacy, competing chronic conditions, and goals of care.
Pharmacologic The choice of antihypertensive medications should be guided not only by blood pressure control but also by benefits related to delaying CKD progression and controlling proteinuria, with an acceptable side-effect profile.
Blockade of the renin-angiotensin system (RAS) using ACEIs or ARBs lowers blood pressure and also decreases proteinuria and is therefore the mainstay in patients with diabetes in whom DN is suspected. Combining an ACEI with an ARB is associated with increased complications and is in general not advised. Two FDA-approved classes of diabetes medications, SGLT2 inhibitors and GLP-1 receptor agonists, have also been shown to significantly reduce risks associated with CKD. Medications from these classes should be considered for patients with type 2 diabetes and CKD, and the indications for SGLT2 inhibitors may be expanding to other etiologies of kidney disease such as IgA nephropathy. They may also be of benefit in nondiabetic patients with CKD, but the data to support this are still emerging. In addition, management of hyperlipidemia with statins or statins plus ezetimibe is
recommended for older adults with eGFR less than 60 mL/min/1.73 m2.
Nonpharmacologic Patients with CKD are more likely to experience AKI, and AKI is a risk factor for progression to ESKD. Thus, preventing AKI is important to delay CKD progression. Avoidance of nephrotoxins is a mainstay of CKD management. In particular, nonsteroidal anti-inflammatory drugs (NSAIDs) can have detrimental effects especially if the patient is also taking ACEIs or ARBs, by decreasing glomerular perfusion and filtration rate. Contrast dye can cause tubular cell damage and AKI possibly leading to CKD. Many antibiotics can have nephrotoxic effects and cause AKI.
Obesity is associated with compensatory glomerular hypertrophy, which causes podocyte hypertrophy that may progress to podocyte dysfunction, decreased podocyte density, and accelerated loss of kidney function. CKD is associated with salt retention and a low-salt diet may slow progression of CKD independent of improving hypertension.
Prevention
Prevention is the ideal approach to avoid ESKD and its associated morbidity and mortality. High-risk patient subgroups may particularly benefit from implementation of measures to detect and prevent CKD. Patients with diabetes mellitus should be screened regularly for albuminuria and elevations in serum creatinine levels. Similarly, patients with hypertension, peripheral vascular occlusive disease, reduced kidney mass (eg, after nephrectomy), or a family history of kidney disease should be regularly
evaluated. Patients with risk factors for CKD should be monitored for optimal blood pressure control, avoidance of nephrotoxins, polypharmacy, and episodes of AKI.
Protein in the Diet
In nondiabetic patients with CKD with less advanced disease (CKD stage 3 or lower), low-protein diets do not appear to reduce the progression to ESKD compared with normal-protein diets.
There is some evidence that low-protein diets (0.5–0.6 g per kg per day) or very low-protein diets (0.3–0.4 g per kg per day) may reduce the number of patients with advanced kidney disease (CKD stage 4 or 5) who progress to ESKD. This should be done with regular monitoring to prevent malnourishment.
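The per-kilogram prescriptions above translate directly into daily gram allowances. The following calculation is a purely illustrative sketch (the function name is invented here) showing the arithmetic for a 70-kg patient:

```python
def daily_protein_range(weight_kg: float, g_per_kg_low: float, g_per_kg_high: float):
    """Daily protein allowance (grams/day) for a given body weight
    and a prescribed g/kg/day range taken from the text."""
    return (round(weight_kg * g_per_kg_low, 1),
            round(weight_kg * g_per_kg_high, 1))

# A 70-kg patient on a low-protein diet (0.5-0.6 g/kg/day)
print(daily_protein_range(70, 0.5, 0.6))   # (35.0, 42.0)
# The same patient on a very low-protein diet (0.3-0.4 g/kg/day)
print(daily_protein_range(70, 0.3, 0.4))   # (21.0, 28.0)
```

Such targets must be individualized, and, as noted above, any protein restriction requires regular monitoring to prevent malnourishment.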
Special Issues
Patient preference Once the eGFR falls below approximately 20 mL/min/1.73 m2, the risk of developing ESKD requiring dialysis support increases significantly, and a discussion about patient preferences regarding dialysis should be initiated. In this context, the comorbidities and functional status of
the patient should be considered, because quality of life and functional status may not improve or could be worsened after initiation of dialysis in patients with high comorbidity burden and low functional status.
Comorbidity Management of comorbidities is important in the care of patients with CKD. One of the most important comorbidities is hypertension. With worsening kidney function, secondary hyperparathyroidism can develop due to retention of phosphate and decreased activation of vitamin D, causing parathyroid hormone (PTH) to increase. Therefore, serum phosphate, 25-OH-vitamin D, and PTH levels should be monitored. Hyperphosphatemia can be managed by a low-phosphate diet. Oral phosphate binders, such as calcium acetate, sevelamer, or lanthanum carbonate, may have to be given if diet alone does not maintain normal phosphate levels. Decreased 25-OH-vitamin D levels should be treated with supplementation of cholecalciferol or ergocalciferol. Significantly elevated PTH levels unresponsive to vitamin D supplementation may require treatment with active vitamin D analogs, including calcitriol, but these have to be used with great care due to the risk of hypercalcemia and hyperphosphatemia. Patients with PTH
levels unresponsive to vitamin D supplementation may be evaluated for tertiary hyperparathyroidism.
Care settings CKD is primarily managed in an outpatient setting.
ACUTE KIDNEY INJURY
Definition
AKI is defined by an increase in creatinine (a decrease in eGFR) within a few days or even hours, although it is important to understand that a rising creatinine level is a relatively insensitive and delayed marker of injury. AKI is further classified by urine output as nonoliguric (> 400 mL/day), oliguric (100–400 mL/day), and anuric (< 100 mL/day).
This has prognostic implications as nonoliguric AKI is associated with better outcomes. AKI can be further distinguished by etiology such as prerenal, intrarenal, and postrenal.
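The urine-output categories above are simple threshold rules. As a purely illustrative sketch (the function name is invented; the cutoffs are those stated in the definition), they can be expressed as:

```python
def classify_aki_urine_output(ml_per_day: float) -> str:
    """Classify AKI by 24-hour urine output (mL/day), using the
    thresholds given in the text."""
    if ml_per_day < 100:
        return "anuric"        # < 100 mL/day
    elif ml_per_day <= 400:
        return "oliguric"      # 100-400 mL/day
    else:
        return "nonoliguric"   # > 400 mL/day

print(classify_aki_urine_output(50))    # anuric
print(classify_aki_urine_output(250))   # oliguric
print(classify_aki_urine_output(1200))  # nonoliguric
```

As the text notes, the nonoliguric category carries the best prognosis, which is why accurate urine-output monitoring matters in suspected AKI.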
Epidemiology
The incidence of AKI increases with age, with the majority of patients who develop AKI being older than 65 years. When AKI does occur in this age group, it is associated with significant morbidity and mortality. However, the epidemiology of AKI is difficult to determine consistently and detailed information about long-term outcomes is often lacking. Studies have shown that even modest increases (< 50%) of serum creatinine levels are associated with significantly higher risk of long-term CKD.
Pathophysiology (Classification)
Etiologies of AKI can be categorized into prerenal, postrenal (obstructive), and intrarenal (intrinsic) causes (summarized in Table 83-1). Prerenal etiologies include intravascular volume depletion due to fluid loss with decreased total body volume (ie, diarrhea, vomiting, and active bleeding) or low intravascular oncotic pressure (hypoalbuminemia secondary to nephrotic syndrome [NS], liver dysfunction, or malnutrition). Decreased cardiac output can also lead to a prerenal state even if the patient has peripheral and pulmonary edema. Patients with preexisting renal vascular disease are also at increased risk for prerenal AKI with milder degree of hypovolemia or hypotension.
TABLE 83-1 ■ ACUTE RENAL FAILURE IN OLDER ADULTS
PRERENAL (ACUTE REVERSIBLE RENAL HYPOPERFUSION)
Hypovolemia
  Fluid loss: gastrointestinal (diarrhea, vomiting), renal (diuretics)
  Redistribution of the extracellular volume: shock (septic, cardiogenic)
  Hypoalbuminemia: nephrotic syndrome, liver diseases, malnutrition
  Hemorrhage
  Inappropriate fluid intake
Interference with renal autoregulatory mechanisms: ACEIs, cyclosporine, NSAIDs
Cardiac failure
  Acute: acute myocardial infarction, arrhythmias, cardiac tamponade, malignant hypertension
  Chronic: ischemic and hypertensive cardiomyopathies, valvulopathies
RENAL OR INTRINSIC
Acute glomerulonephritis: mesangiocapillary, postinfectious, rapidly progressive, Goodpasture syndrome, idiopathic
Vasculitis: hypersensitivity angiitis, hemolytic-uremic syndrome, Henoch-Schönlein purpura, mixed cryoglobulinemia, scleroderma, systemic lupus erythematosus, Wegener granulomatosis, polyarteritis nodosa
Acute interstitial nephropathies
  Drugs: allopurinol, ampicillin, analgesics (including NSAIDs), cimetidine, diphenylhydantoin, thiazides
  Infections: acute pyelonephritis
  Infiltrative: leukemia, lymphoma, sarcoidosis
  Idiopathic
Intratubular obstruction: myeloma proteins, myoglobin, sulfonamides, urate
Hypercalcemia
Hepatorenal syndrome
OBSTRUCTIVE
Ureteral and pelvic (intrinsic obstruction): blood clots, fungus balls, sloughed papillae (diabetes, analgesic abuse)
Extrinsic obstruction: fecal impaction, malignancy, retroperitoneal fibrosis
Bladder: bladder carcinoma, blood clots, neuropathic bladder, prostatic hypertrophy, stones
Urethra: phimosis, strictures
Postrenal etiologies for AKI include bladder outflow or ureteral obstruction. Obstruction of one ureter usually does not cause markedly elevated creatinine levels except in patients with some degree of preexisting CKD or only one functional kidney. In older adults, benign prostate hypertrophy and malignancies, including prostate cancer and cancer of cervix and uterus, have to be considered. In particular, bilateral ureteral obstruction without bladder retention is highly suspicious for a malignant mass in the lower abdomen.
Intrarenal causes of AKI can be caused by acute glomerular nephritis (GN) or tubule-interstitial nephropathies. Acute GN can be caused by rapidly progressive GN due to systemic vasculitis or autoimmune complex deposition in the kidney (see below). Tubulointerstitial nephropathies can be secondary to nephrotoxins, which cause acute tubular necrosis (ATN), intratubular obstruction, interstitial infiltration, or inflammation. ATN can be caused by a large number of nephrotoxins, including IV contrast dye, chemotherapeutic agents, and antibiotics, which are taken up by tubular epithelial cells leading to acute tubular cell damage. In the setting of trauma or hemolysis, free hemoglobin and myoglobin can cause ATN (pigment nephropathy). Tumor lysis syndrome is seen during treatment of cancer, in particular lymphomas and leukemias and is characterized by hyperkalemia, hyperphosphatemia, and hyperuricemia. Contrast-induced acute kidney injury (CI-AKI) represents a common form of iatrogenic AKI. Major risk factors for CI-AKI include old age, presence of CKD, diabetes, heart failure,
volume depletion, and concomitant exposure to other nephrotoxins. There has been research into the best strategies to mitigate the risk for CI-AKI when administering IV contrast. The primary approach is to avoid the exposure if
possible and confirm that a study with iodinated contrast generates adequate benefit to assume the risk. In higher-risk patients, approaches to attenuate the risk of CI-AKI include stopping any nephrotoxins, use of intravenous isotonic saline for volume expansion, and use of a low or iso-osmolar contrast agent.
Polypharmacy and drug toxicity exacerbate the susceptibility of older adults to AKI. Drugs commonly associated with AKI include NSAIDs, diuretics, ACEIs, ARBs, and antibiotics. Moreover, NSAID use doubles the risk for AKI in adults older than age 65. Older adults are also at increased risk for renal injury in particular due to hypovolemia, sepsis, and iatrogenic complications related to drug toxicity.
Presentation
Patients with AKI can be asymptomatic and detected solely by elevated creatinine levels or present with symptoms that result from various disturbances of kidney function. Patients may have decrements in urine output and signs of volume overload. In patients with more advanced stages of AKI, accumulation of substances cleared by the kidney can lead to uremic symptoms characterized by fatigue, loss of appetite, headache, nausea and vomiting, metallic taste, shortness of breath, asterixis, lethargy, and seizure.
Potassium levels can increase and lead to cardiac arrhythmias, which can be life threatening.
Evaluation
Kidney function should be examined in every patient at risk for AKI based on presenting symptoms. If creatinine is found to be elevated, values from previous tests are helpful to determine duration of renal dysfunction. Urine output has to be monitored accurately and catheterization of the bladder should be considered if clinically indicated as in case of bladder outflow obstruction. An ultrasound of the kidneys may be performed to evaluate for hydronephrosis and kidney size as well as echogenicity, which can be increased in both CKD and ATN. Doppler of the kidneys can determine blood flow (decreased or absent in renal artery or vein thrombosis) and resistive indices (increased in CKD and ATN). Urinalysis findings including casts, crystals, and tubular cells can be suggestive of tubular necrosis.
Prerenal volume depletion is supported by clinical examination, including hypotension, tachycardia, decreased skin turgor, and dry oral mucosa.
Management
The management of AKI is focused on treating the underlying cause and supportive medical management. Prerenal volume depletion is treated with volume repletion using isotonic solutions with careful monitoring of fluid status, respiratory function, and urine output. Postrenal causes acutely require placement of a bladder catheter for bladder outflow obstruction or placement of ureteral stents or nephrostomy for ureteral obstruction. The mainstay in the management of patients with intrarenal AKI is close monitoring of renal function, electrolytes, volume status, and urine output as well as signs or symptoms of uremia in order to determine need for dialysis support.
Diuretics can be used if needed to manage volume status and potentially avoid respiratory failure, but maintaining urine output above oliguric levels does not improve the outcome of AKI; thus, diuretics should be used to address volume overload and prevent respiratory failure, not to convert oliguric to nonoliguric AKI. Hyperkalemia can often be managed nonacutely with a low-potassium diet and diuretics. Dialysis support may be required in patients with AKI. Dialysis initiation is determined by the development of indications for dialysis, including electrolyte, acid–base, and fluid volume abnormalities that are life-threatening and cannot be managed medically, and symptoms related to uremia.
Prevention
Older adults are at increased risk for AKI due to polypharmacy, high prevalence of preexisting CKD, and decreased oral intake (decreased thirst). Limiting polypharmacy is already a mainstay of geriatric care, but a particular focus is necessary for AKI prevention.
Special Issues
Patient preference AKI may require temporary dialysis support or lead to ESKD and permanent chronic dialysis dependence. It is important to discuss the patient's preferences regarding dialysis with sufficient time prior to initiation, especially given the decreased survival of older adults who initiate dialysis in the intensive care unit (ICU) setting.
Comorbidity Almost half of Medicare beneficiaries age 65 or older have three or more chronic conditions, and the number of preventable hospitalizations per 1000 beneficiaries increases with the number of chronic conditions.
Comorbidities that increase the risk for AKI are common among older
patients. Patients with both CKD and AKI tend to be older, have ischemic heart disease, and be less likely to recover kidney function than patients with AKI alone.
Care settings AKI is very common in the inpatient setting owing to the complications of acute illness and resulting acute interstitial nephritis (AIN) and ATN. The underlying etiologies of AKI in the outpatient setting differ, with obstruction and polypharmacy being more likely causes in the geriatric population.
GLOMERULONEPHRITIS (GN)
Some GN conditions have a second peak in the geriatric population, such as small-vessel vasculitis and postinfectious glomerulonephritis (PIGN). GN can be classified into groups based on course, histopathologic findings, and disease etiology. Rapidly progressive GN (RPGN) can be associated with autoimmune processes, infectious diseases, and systemic diseases, in particular vasculitides, malignancies, and medications (Table 83-2). The most common primary GN worldwide is immunoglobulin A (IgA) nephropathy. It can have a highly variable course, with some patients presenting with RPGN, approximately 30% to 40% of patients progressing to ESKD within 20 years, and other patients having only persistent hematuria without proteinuria or changes in GFR. The incidence of ESKD in older patients with IgA nephropathy (older than age 50) is almost two times higher than that in younger patients. PIGN is considered primarily a childhood disease, but its occurrence in older patients has become increasingly recognized. PIGN typically presents with hematuria and proteinuria 10 to 14 days after an infection. In reported series of older adults with PIGN, hypocomplementemia was present in more than 70% of patients, and almost half required acute dialysis. Only about 20% achieved complete recovery, half had persistent renal dysfunction, and about 30% progressed to ESKD. In summary, the epidemiology of PIGN is shifting as the population ages. Older men and patients with diabetes or malignancy are particularly at risk, and the sites of infection and causative organisms differ from those of the typical childhood disease.
TABLE 83-2 ■ CLASSIFICATION OF RAPIDLY PROGRESSIVE GLOMERULONEPHRITIS
Type I: anti-GBM-mediated disease without pulmonary hemorrhage (with anti-GBM antibodies)
Type II: immune complex-mediated disease (without anti-GBM or ANCA)
Type III: pauci-immune (with ANCA)
Type IV: mixed pattern (with anti-GBM and ANCA)
Type V: pauci-immune (without ANCA or anti-GBM)
Fibrillary and immunotactoid glomerulonephritis
Focal sclerosis (rare)
IgA nephropathy
Mesangiocapillary glomerulonephritis (especially type II)
Membranous glomerulonephritis (with or without anti-GBM)
Other primary
Associated with infectious diseases
  Hepatitis B and C
  Histoplasmosis
  Infective endocarditis
  Influenza (?)
  Mycoplasma infection
  Poststreptococcal glomerulonephritis
Associated with multisystem disease
  Carcinoma (lung, bladder, prostate)
  Goodpasture disease (anti-GBM with pulmonary hemorrhage)
  Lymphoma
  Mixed (IgG/IgM) cryoimmunoglobulinemia (hepatitis C)
  Relapsing polychondritis
  Henoch-Schönlein purpura
  Systemic lupus erythematosus
Small-vessel vasculitis is rare and often presents with systemic symptoms, including fatigue, malaise, loss of appetite, fever, and weight loss. Occasionally hemoptysis, rash, and joint pain are noted. Some patients do not have any systemic symptoms, but rather develop hematuria, variable degree of proteinuria, and rising creatinine. In those patients, a kidney biopsy may show pauci-immune GN.
Presentation
Patients with GN report constitutional symptoms, including decreased energy, general malaise, decreased appetite, fever and weight loss, and muscle aches. These symptoms are nonspecific and some patients attribute those symptoms to viral infections like upper respiratory infections. Any of those symptoms should prompt examination of kidney function and urine. The presence of hematuria and proteinuria should immediately raise concern for GN.
Evaluation
The evaluation of a patient with suspected glomerular disease depends primarily on the acuity of the presentation. If the patient presents with acute kidney failure and has hematuria and proteinuria, RPGN has to be suspected. In emergent and urgent presentations, the prompt consultation of nephrology may help to guide both diagnostics and therapeutic approaches.
The decision whether and when to perform a kidney biopsy depends on the overall condition and status of each individual patient. Increased frailty and decreased baseline functional status, preexisting CKD with decreased eGFR, small echogenic kidneys on ultrasound, limited life expectancy, and anemia, thrombocytopenia, or coagulopathy all increase the risk of complications after kidney biopsy.
If the patient has stable renal function but a glomerular disease is suspected because of hematuria and proteinuria, the patient should first undergo serologic evaluation. Depending on those results a kidney biopsy can be considered. If the creatinine is rising and pre- and postrenal causes of AKI have been excluded, an urgent kidney biopsy may be considered.
Management
The diagnosis of RPGN may justify empiric immunosuppressive therapy. Even with stable kidney function, it is important to establish specific disease etiologies to guide therapy.
Pharmacologic Corticosteroids are the mainstay in the treatment of acute and chronic GN. Patients need to be monitored closely for any signs or symptoms of recurrence of disease in particular during tapering prednisone dose and after its discontinuation.
Cytotoxic therapy with agents such as cyclophosphamide is considered in patients with crescentic and proliferative GN detected on biopsy or in the setting of other potentially life-threatening complications (ie, hemoptysis, severe hemolytic anemia, and thrombocytopenia). Whether to use IV or oral cyclophosphamide and the length of therapy depend on the specific disease, severity of the initial presentation, and degree of response. Anti-CD20 antibodies (rituximab) can also be considered; recent studies suggest efficacy equivalent to that of cyclophosphamide. Maintenance therapy is usually required to prevent disease recurrence. The goal is to maintain the patient on the lowest dose of corticosteroids and steroid-sparing agents without flare of the disease or significant complications of therapy.
Supportive medical management is critical for patients with GN, in particular management of blood pressure (hyper- and hypotension), intravascular and total body fluid volume, electrolyte abnormalities, and unwanted effects and complications of therapy.
Special Issues
Patient preference Treatment of GN is associated with significant side effects and complications, which need to be balanced against the side effects and complications of renal failure and ESKD. Risks and benefits of all options, with specific consideration of the underlying condition, renal and overall survival prognosis with and without treatment, comorbidities, functional status, and expectations, should be discussed. Furthermore, treatment with immunosuppressive medications often requires frequent office visits and laboratory tests, which may significantly affect quality of life or may not even be feasible.
NEPHROTIC SYNDROME (NS)
Definition
NS is defined as the presence of nephrotic range proteinuria (> 3 g/day), hypoalbuminemia, edema, and dyslipidemia. It is caused by different diseases affecting primarily the glomerular structure, in particular podocytes and the slit diaphragm that spans between the foot processes of podocytes.
Thus, patients with NS always have glomerular and podocyte damage detectable on kidney biopsy. Patients in all age groups can develop NS, and the incidence and prevalence of NS in older adults are not well defined. NS is likely to be missed in older adults, who often have edema and low serum albumin caused by other etiologies. Minimal change disease (MCD) is most common in children and has a second peak in adults older than age 50. DN may be the most common cause of NS in adults.
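Because the definition is a conjunction of four findings, it can be restated as a simple check. This is an illustrative sketch only: the function name and the boolean inputs are assumptions, and only the > 3 g/day proteinuria cutoff comes from the text.

```python
def meets_nephrotic_syndrome_criteria(proteinuria_g_day: float,
                                      hypoalbuminemia: bool,
                                      edema: bool,
                                      dyslipidemia: bool) -> bool:
    """NS per the definition in the text: nephrotic-range
    proteinuria (> 3 g/day) plus hypoalbuminemia, edema,
    and dyslipidemia, all present together."""
    nephrotic_range = proteinuria_g_day > 3.0
    return nephrotic_range and hypoalbuminemia and edema and dyslipidemia

print(meets_nephrotic_syndrome_criteria(4.2, True, True, True))   # True
print(meets_nephrotic_syndrome_criteria(1.5, True, True, True))   # False
```

The sub-nephrotic case in the usage example is the clinically easy miss: edema and low albumin from other causes (heart failure, liver disease, malnutrition) can mimic NS without nephrotic-range proteinuria, which is why quantifying the proteinuria matters.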
Pathophysiology (Classification)
The differential diagnosis in older adult patients presenting with NS includes most diseases that are seen in younger adults. A kidney biopsy may be needed to define the cause of NS and estimate the degree of fibrosis in this age group, and thereby greatly aid in focusing the diagnostic evaluation and in planning treatment. Many clinicians recommend deferring additional laboratory or imaging procedures until a histopathologic diagnosis has been made by renal biopsy.
The most common diseases identified by kidney biopsy are membranous nephropathy (MN), MCD, and amyloidosis, with approximately 60% of all cases accounted for by these three conditions. Less common causes among older patients undergoing a biopsy for diagnosis of NS are FSGS, proliferative GN, and DN. A membranoproliferative pattern of injury can be
seen in association with a monoclonal deposition disease, such as light-chain deposition disease, which is more common in older adults. DN is less commonly encountered on biopsies, largely because patients with diabetes and NS seldom undergo renal biopsy unless “atypical” features are present such as onset of NS fewer than 5 years from discovery of diabetes, rapid progression of renal impairment, or absence of proliferative retinopathy and other microvascular complications of diabetes.
MCD is encountered in approximately 15% and MN in approximately 30% to 40% of renal biopsies in older adults with isolated NS. Both MN and MCD can be primary idiopathic glomerular diseases or secondary to extrarenal conditions such as neoplasia, drugs, or infection. Approximately 10% of renal biopsies in older patients believed to have primary idiopathic NS on clinical grounds will reveal amyloidosis. Amyloidosis in the older adult is most often of the primary variety.
Presentation
The most common symptom is lower extremity edema and sometimes patients also notice facial or upper extremity swelling or increased abdominal girth from ascites. Often, patients are asymptomatic and albuminuria is detected.
Occasionally, patients are found to have hyperlipidemia as the first presenting abnormality. Rarely, patients with NS present with an arterial or venous thrombotic event; renal vein thrombosis, in particular, should raise concern for NS.
Evaluation
New onset of lower extremity edema should always warrant a urinalysis, which is inexpensive and noninvasive and can quickly guide further diagnostic interventions. If the urinalysis shows albuminuria, renal function, serum albumin, and lipid levels should be determined. Once NS is diagnosed, a nephrology consultation is indicated for work-up and therapeutic recommendations. A kidney ultrasound is used to evaluate kidney size, which can be increased in patients with NS due to diabetic and HIV nephropathy, amyloidosis, and infiltrative diseases that can also cause MN. A kidney biopsy should be considered in all patients with new-onset NS. Because some etiologies of NS, including MCD and MN, can be paraneoplastic syndromes, a malignancy work-up should be considered.
Management
Pharmacologic Treatment of the underlying etiology of NS is usually divided into immunosuppressive and nonimmunosuppressive therapies.
Immunosuppressive therapy usually includes induction and maintenance regimen. Which regimen is recommended depends on the specific disease etiology, risk for progression, presence of comorbidities, and risk of complications. Disease-specific immunosuppressive regimens usually include pulse-dose corticosteroids, followed by oral steroids very similar to patients with GN as described earlier. The rate of steroid taper often is guided by response to therapy. Additional therapeutic strategies may include B-cell depletion using anti-CD20 antibodies, calcineurin inhibitors, and mycophenolate mofetil.
Non-immunosuppressive therapies target mainly proteinuria and edema. The mainstay of interventions to lower proteinuria is blockade of the RAS with either an ACEI or an ARB, which lower intraglomerular pressure and also exhibit direct beneficial effects on podocytes. Peripheral edema and ascites require sodium restriction (< 2 g/day) and loop diuretics; edema should be reversed slowly to avoid hypovolemia and AKI. Hyperlipidemia usually resolves with resolution of NS. In patients who experience a thrombotic event due to the hypercoagulable state of NS, anticoagulation may have to be considered.
Prevention
Management of diabetes attenuates the likelihood of developing DN and subsequent NS.
Special Issues
Patient preference NS is a potentially life-threatening disease due to cardiovascular and infectious complications. In addition, NS can lead to AKI, potentially requiring dialysis support, but is potentially reversible. On the other hand, NS can lead to CKD and progression to ESKD even with treatment. Therefore, patients need to be educated about the overall prognosis, and a discussion of goals of care should be initiated early during the course.
DIABETIC NEPHROPATHY
Definition
Diabetic kidney disease or DN is a progressive microvascular complication due to long-standing diabetes mellitus that can lead to ESKD.
Epidemiology
About 30% of patients with diabetes mellitus develop DN during their lifetime. In the general adult population in the United States, DN is the most common cause of NS and ESKD requiring dialysis. DN is more common in patients with other microvascular complications, in patients with neuropathy and retinopathy, and in patients with a family history of kidney disease, including DN.
Pathophysiology (Classification)
DN is a microvascular complication of patients with diabetes mellitus. DN has several distinct phases and complex molecular mechanisms are involved in the development of the disease and its outcomes. A significant proportion of patients exhibit accelerated loss of kidney function once eGFR is below 45 mL/min/1.73 m2. Hyperglycemia appears to contribute significantly to
development of DN because DN is a complication of both types 1 and 2 diabetes.
Presentation
Patients are usually diagnosed with DN through routine screening tests for renal function and albuminuria, as is standard of care. Patients with earlier stages of DN may show hyperfiltration, with eGFR above 120 mL/min/1.73 m2, and/or albuminuria that is often below the detection limit of standard urinalysis and therefore requires specific testing for "microalbuminuria." At later stages, patients may present with "macroalbuminuria" and NS. In all patients with DM and eGFR below 60 mL/min/1.73 m2, DN is the likely underlying etiology. Absence of proteinuria, or proteinuria below the nephrotic range, does not exclude DN.
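The "micro-" and "macroalbuminuria" labels correspond to conventional urine albumin-to-creatinine ratio cutoffs of approximately 30 and 300 mg/g; these specific numbers are standard usage rather than stated in the text, so the following sketch should be read as illustrative:

```python
def albuminuria_category(acr_mg_per_g: float) -> str:
    """Classify a urine albumin-to-creatinine ratio (mg/g creatinine).
    Cutoffs (< 30, 30-300, > 300) are the conventional ones, assumed
    here rather than taken from the text."""
    if acr_mg_per_g < 30:
        return "normal to mildly increased"
    elif acr_mg_per_g <= 300:
        return "microalbuminuria"
    else:
        return "macroalbuminuria"

print(albuminuria_category(15))   # normal to mildly increased
print(albuminuria_category(120))  # microalbuminuria
print(albuminuria_category(800))  # macroalbuminuria
```

Values in the middle band are the ones a standard urinalysis dipstick tends to miss, which is why dedicated albuminuria testing is part of routine DN screening.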
Evaluation
All patients with DM should be regularly evaluated for development of albuminuria and changes in eGFR. Even though a kidney biopsy may not be necessary in many patients with suspected DN, it has to be considered in the presence of any concern for other etiologies. Patients have to be evaluated
for electrolyte abnormalities because type IV renal tubular acidosis (RTA) may lead to hyperkalemia, and serum albumin and lipid levels need to be determined to diagnose NS and its associated complications.
Management
The main goals in patients with DN are optimization of blood glucose levels, blood pressure, and use of kidney-protective medications. In addition, complications of CKD as well as timely preparation for dialysis have to be managed.
Pharmacologic The development of agents to protect kidney function in DN has been a dramatic success in the prevention of ESKD. Blockade of the RAS and tight glycemic control have been shown to improve outcomes in patients with DN. The medications used to achieve these goals should be tailored to an individual patient’s comorbidities. ACEIs, ARBs, SGLT2 inhibitors, and GLP-1 receptor agonists delay progression of CKD to ESKD and should be started, if tolerated, in all patients with diabetes mellitus and any stage of CKD and/or hypertension. If sodium retention and edema are present, diuretics may be used. Various organizations generally agree that the blood pressure goal for patients with diabetes and CKD should be below 130/80 mm Hg.
Special Issues
Patient preference Patients with diabetes who require dialysis have a higher complication rate and mortality.
Comorbidities Many patients with DN suffer from other complications of DM, including neuropathy and retinopathy, hyperlipidemia, macrovascular disease (coronary artery disease [CAD]), and obesity. Management of these comorbidities can potentially delay progression of CKD to ESKD. Diabetic patients with ESKD receiving dialysis are also at increased risk for cardiovascular and infectious complications.
INTERSTITIAL NEPHRITIS
Definition
Interstitial nephritis is characterized by inflammation of the tubulointerstitial area of the kidney causing tubular dysfunction and thereby proteinuria and decrease in renal function.
Epidemiology
The incidence of acute interstitial nephritis (AIN) is very difficult to assess with confidence, as many episodes are likely undiagnosed and/or self-limited. AIN is an important cause of AKI. In studies, 80% of older patients showed partial or complete recovery within 6 months.
Pathophysiology (Classification)
The most common cause of AIN is secondary to drug therapy, but autoimmune diseases or other systemic diseases, infections, and tubulointerstitial nephritis with uveitis (TINU) syndrome can also cause AIN. In older adults most cases of AIN are due to drugs. Even though virtually any drug can cause AIN, the most common culprits are antibiotics (especially penicillins, cephalosporins, and ciprofloxacin), proton pump inhibitors, NSAIDs, and diuretics.
Presentation
AIN presents with nonspecific signs and symptoms, including nausea, vomiting, and malaise, unless the patient has systemic signs of an allergic drug reaction such as fever or rash. Hematuria is rare and proteinuria is usually not significant, with the exception of NSAID-induced AIN, which can be accompanied by NS. Atypical presentations with minimal symptomatology are often seen in older adults and require a higher index of suspicion; NSAIDs and proton pump inhibitors are frequent offenders.
Evaluation
In addition to the physical examination, serum creatinine levels have to be monitored. In the urine sediment, white cells, red cells, and white cell casts are typically seen. Eosinophilia may be present but lacks the sensitivity and specificity to help in the diagnosis of AIN. Glucosuria, high fractional excretion of sodium (> 1%), and RTA may be present, indicating renal tubular cell damage. A kidney biopsy may be considered in situations where the diagnosis of AIN would change management.
Management
The most important intervention when drug-induced AIN is suspected is to stop the offending agent. Consider medications that have been recently initiated; in the absence of recent exposure to a new drug, medications to consider among those that have been used for a longer period include proton pump inhibitors.
Drugs Corticosteroid therapy has been shown to improve outcomes if initiated within 14 days after first symptoms. Although the recommended dose is equivalent to that used for treatment of GN or vasculitis, the lower doses used for allergic dermatitis should have the same effect in the kidney.
Prevention
To prevent AIN, medications with known higher-than-average risk of AIN— in particular proton pump inhibitors—should only be prescribed if and for as long as needed.
Special Issues
Patient preference To limit complications, the use of corticosteroids in AIN treatment regimens should be limited in duration and to the lowest effective dose, even though the efficacy of lower doses remains uncertain.
Comorbidity AIN can present with systemic symptoms, in particular skin rash and peripheral eosinophilia. In rare cases, involvement of the airways in the allergic reaction can cause bronchospasm.
RENOVASCULAR DISEASE
Definition
Renovascular disease is anatomic narrowing of a main renal artery (renal artery stenosis) or its branches and can cause secondary hypertension and progressive renal insufficiency. Renal artery stenosis may be asymptomatic or cause hypertension and ischemic nephropathy. Renovascular hypertension is defined as the elevation of blood pressure secondary to compromised arterial circulation of the renal parenchyma causing chronic renal hypoperfusion. Ischemic nephropathy can be defined as a reduction in kidney function resulting from a partial or complete luminal obstruction of the preglomerular renal arteries of any caliber.
Epidemiology
Renovascular hypertension is a common cause of potentially remediable secondary hypertension in older patients with an estimated prevalence of 2%
to 3% in the general hypertensive population and perhaps as much as 40% of those with refractory hypertension.
Pathophysiology
Atherosclerosis accounts for almost 90% of cases in the geriatric population, with fibromuscular dysplasia comprising most of the rest, but the spectrum of conditions that can cause renovascular disease is broad (Table 83-3). Atheroembolic renal disease falls into the category of renal insufficiency induced by preglomerular ischemia. This entity has usually been described in patients with clinical evidence of atheromatous occlusive disease following invasive intra-aortic diagnostic or therapeutic procedures, although spontaneous embolic episodes have been reported.
TABLE 83-3 ■ CATEGORIES OF RENAL ARTERY DISEASE
Presentation
Findings that might prompt the clinicians to consider renal artery stenosis are onset of hypertension after the age of 50, notably diastolic hypertension, accelerated or difficult-to-control hypertension, coexisting diffuse atherosclerotic vascular disease and decreased GFR, acute or subacute increase in serum creatinine levels after initiation of therapy with ACEIs or ARBs, recurrent pulmonary edema, grades III to IV hypertensive retinopathy, abdominal or flank bruit, hypokalemia in the absence of diuretic use, erythrocytosis, microangiopathic hemolytic anemia, and hyperuricemia.
Acute or subacute fall in GFR (rise in serum creatinine) can be precipitated by treatment of hypertension with ACEIs or ARBs during a period of days to weeks after initiating therapy. It is associated with hypoperfusion of the kidneys caused by inhibition of angiotensin II–dependent autoregulatory pathways in the glomerulus. Even so, many patients with renovascular disease tolerate these agents and do not require additional noninvasive studies. The role of direct renin inhibitors in renal artery stenosis has not been defined. Patients with suspected or documented renovascular hypertension and poorly controlled hypertension may present with progressive azotemia or with recurrent pulmonary edema (23% prevalence in some series).
Azotemia in an older patient that cannot be explained by other renal diseases—in particular in the setting of worsening renal failure, bland urinary sediment, proteinuria (< 1 g/day), hypertension, and evidence of peripheral vascular disease—should prompt an evaluation for renovascular disease. It is not clear what proportion of the dialysis population have renovascular disease as the underlying cause of ESKD.
Evaluation
The diagnosis of renovascular hypertension is based on the demonstration of renal artery stenosis (usually by angiography or CT angiography), pathophysiologic significance of the stenotic lesion, and correction of the hypertension by an intervention that relieves the stenosis.
Duplex ultrasound scanning also allows imaging of the renal arteries and measurement of kidney size and is not affected by medications or the level of GFR. Reported sensitivity and specificity values are in the low- to mid-90% range. The disadvantage of this test is that it is technically demanding and has a steep learning curve at each center that performs it.
Magnetic resonance angiography (MRA) is effective in screening for renal artery stenosis, with the advantages of less contrast exposure and a less invasive approach than arteriography. However, cases of nephrogenic systemic fibrosis (NSF) have been described in patients with CKD exposed to gadolinium. Therefore, MRA for defining renal artery stenosis in patients with CKD is used infrequently and only after careful consideration. Another highly accurate noninvasive screening study for renal artery stenosis is spiral (helical) CT with CT angiography; however, it carries a risk of contrast nephropathy in patients with CKD.
ACEI radionuclide renal scintigraphy using technetium-99m diethylenetriamine penta-acetic acid (99mTc-DTPA) has shown high sensitivity and specificity for renovascular hypertension. Two limitations should be kept in mind: (1) these tests have not been evaluated in patients
with azotemia, and (2) while a positive test predicts an improvement in blood pressure, it is not known whether it also predicts an improvement in renal function.
The gold standard for diagnosing renal artery stenosis is renal angiography. Intra-arterial digital subtraction angiography (IA-DSA) or a CO2 angiogram also provides excellent anatomic detail and requires less
contrast than conventional angiography. The technique of intravenous DSA, although less invasive, does not provide comparable resolution to the aforementioned tests because of the high degree of bowel gas and motility artifacts, and usually requires a significantly larger amount of nephrotoxic contrast material.
The second step in making the diagnosis of renovascular hypertension is to determine the pathophysiologic significance of the lesion. Some of the diagnostic tests already mentioned are also used to assess this issue.
Selective renal vein renin measurement is the gold standard for establishing the functional nature of the stenotic lesion and helps predict the blood pressure response to revascularization. In general, a renal vein renin ratio of greater than or equal to 1.5 between the two renal veins is predictive of a beneficial blood pressure response following surgery or angioplasty, but failure to lateralize does not predict a negative response. Overall, the blood pressure response to revascularization cannot be determined with confidence by using renal vein renin measurements.
Management
Renovascular disease can remain stable or worsen over time. Of patients with renal artery stenosis, up to 50% can expect their stenosis to worsen, with reports of up to 5% per year. In general, the rate of progression of renal insufficiency and the likelihood of deterioration of renal function correlates with the extent of stenosis at the time of diagnosis. Older adult patients who develop ESKD secondary to progressive atherosclerotic renal artery obstruction have poor survival.
Medical therapy is the universally accepted first-line treatment for renal artery stenosis. The approach to patients with RAS should address hypertension as well as antiplatelet therapy and treatment of hyperlipidemia and hyperglycemia. ACE inhibitors and ARBs have improved the likelihood of blood pressure control among patients with RAS. Therapy must be further individualized based on the general status of the patient, the presence of any concomitant disease, and the local surgical or angiographic experience of the center.
In the presence of unilateral renal artery stenosis and at least moderately decreased GFR, the latter is usually not improved by intervention. In the Cardiovascular Outcomes in Renal Atherosclerotic Lesions (CORAL) trial, which included 947 patients, revascularization provided no benefit over medical management: patients assigned to revascularization had similar rates of the primary composite outcome after a median follow-up of 3.6 years. Meta-analyses of trials comparing revascularization to medical management likewise demonstrate no benefit.
In bilateral stenosis, there are a number of possible clinical presentations, and the approaches are informed by scarce data, given that the revascularization trials for RAS largely enrolled patients with unilateral, stable disease.
The first clinical presentation is bilateral occlusion of the renal arteries. This situation does not necessarily imply irreversible damage because the viability of the kidneys may be maintained by a collateral blood supply. This is particularly true in patients who have a gradual onset of arterial occlusion. Clinical findings suggesting parenchymal salvageability include the following: angiographic demonstration of retrograde filling of the distal renal arterial system by collateral vessels; renal biopsy showing preserved glomerular architecture; kidney size greater than 9 cm by ultrasound; and function of the involved kidney on renal scintigraphy. Some centers perform kidney biopsies in surgical candidates if their serum creatinine is higher than
4 mg/dL. In patients with serum creatinine less than 3 mg/dL, improved renal function (defined as a reduction in serum creatinine of > 20% from the baseline value) can be expected post-revascularization in nearly half of patients undergoing this procedure.
The second scenario is bilateral stenosis without total occlusion or stenosis in a solitary kidney. Improvement in renal function is frequently seen after reconstructive surgery in 75% to 89% of these patients. Unlike cases with total renal occlusion, revascularization to preserve renal function is not worthwhile in patients with severe renal insufficiency (serum creatinine > 4 mg/dL) because they usually have advanced underlying renal parenchymal disease (nephrosclerosis and/or atheroembolic disease), which is not improved by revascularization. In older patients, atherosclerosis of large vascular structures poses additional challenges to bypass procedures.
Considering the significant risks of progressive renal occlusive disease and renal failure that are associated with medical management of this condition, surgical options for treatment may be considered.
Initially, percutaneous transluminal renal angioplasty (PTRA) had a limited role in older patients because concomitant aortic atherosclerotic disease made any endovascular procedure hazardous and technically difficult. Restenosis following dilatation of atheromatous lesions was quite common, and a significant number of older patients present with ostial lesions, which are not amenable to PTRA alone. PTRA with endovascular stenting has come into vogue, with improved outcomes of revascularization of these ostial lesions. Surgical intervention is presently recommended for more complicated lesions or angioplasty failures. One advantage of angioplasty over surgery is that it can be undertaken in patients whose surgical risks related to systemic atherosclerosis are prohibitively high.
Prevention
Renovascular disease can be prevented by the same measures that prevent atherosclerosis, in particular smoking cessation, exercise, and control of hyperlipidemia.
Special Issues
Patient preference Even though interventions have not provided outcome benefits, in selected patients an intervention may be considered if it improves quality of life by reducing side effects of antihypertensive medications.
END-STAGE KIDNEY DISEASE (ESKD)
Definition
End-stage kidney disease (ESKD), or kidney failure, is defined as kidney function less than 15 mL/min/1.73 m2. ESKD usually results from progression of CKD or from AKI. It is associated with the inability to excrete waste products, control serum electrolytes, handle the daily dietary
and metabolic acid load, and maintain fluid balance. In addition, kidney failure causes inadequate production of erythropoietin, deranged calcium and phosphorous metabolism, high blood pressure, and accelerated progression of cardiovascular disease.
Most etiologies of CKD demonstrate a progression to kidney failure and variability in rate and trajectory of progression among individuals with kidney disease. The rates of decline in kidney function vary by underlying nephropathy, by severity of hypertension and albuminuria, by modifying factors, and between individuals. Historically, the rate of decline could be estimated as 7 to 10 mL/min/year in those with untreated chronic nephropathies such as DN. However, chronic nephropathies have similar effects on electrolyte homeostasis, causes of progressive decline in function, and manifestations of kidney failure so that classification by severity permits a better understanding of underlying routes to progression, symptoms, and hopefully, treatments of CKD.
Epidemiology
The prevalence of kidney failure is substantial worldwide; approximately 10% to 13% of the adult population in North America, Europe, and Asia are estimated to have some form of CKD based on meta-analysis and meta- regression. The majority of those with CKD stages 1 and 2 may not be aware of their kidney disease. The number of patients at risk for developing kidney disease will increase with the increasing prevalence of diabetes and the aging of the population.
The prevalence of older people with kidney failure has been driven by an increased incidence of kidney failure, greater access to renal replacement therapy (RRT), and improved survival of both dialysis patients and kidney
transplant recipients. A substantial proportion of patients in the United States receiving hemodialysis are aged 65 and older (Figure 83-2). This has been a worldwide phenomenon with a marked increase in the rate of incident dialysis patients older than 75 years over the past two decades. Similar rates of increase of kidney failure in older adults have been noted in Europe and Japan with a marked increase in octogenarian hemodialysis patients in Japan. Once referred for kidney failure, older patients have been surviving longer with RRT as dialysis treatment and kidney transplant outcomes have improved. As a result of improvements in technology and greater access to dialysis, the increased prevalence of older adults undergoing RRT generally mirrors the aging trend of the general population.
FIGURE 83-2. Distribution of treatment modality among prevalent ESRD patients by age. (Reproduced with permission from United States Renal Data System. 2020 USRDS Annual Data Report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases. Bethesda, MD, 2020.)
Referral of Patients to Nephrology
The Kidney Disease: Improving Global Outcomes (KDIGO) guidelines recommend that patients with progressive CKD be managed in a multidisciplinary setting. These guidelines advise involvement of a nephrologist under the following circumstances: AKI, GFR less than 30 mL/min/1.73 m2, consistent findings of significant albuminuria, progression
of CKD, urinary red cell casts, CKD and refractory hypertension, persistent abnormalities of potassium, recurrent or extensive nephrolithiasis, and hereditary kidney disease. The aim is to provide time for the nephrology
team to provide an individualized care plan consistent with the goals of the older patient with CKD. Older patients with worsening CKD have a wide spectrum of choices for the treatment of kidney failure. Once therapy has been selected, some of the treatments require lead time prior to the development of an indication for dialysis, hence the need for early nephrology involvement.
CKD and Management of Chronic Health Conditions
Since liberalizing access to dialysis and kidney transplantation, there has been a steady shift for this population to include not only older patients but also those with multiple chronic conditions. Given the complexity of managing older patients with a high burden of comorbid disease, the primary care/geriatrics team has the opportunity to play a crucial role in the care of these patients. Patients with advanced CKD have often been shown to have gaps in their primary health care, diabetes care, and cardiovascular disease care. There are also gaps in nutrition, fall prevention, management of depression, and frailty among older patients undergoing dialysis.
Indications for Initiation of Dialysis
The most common symptoms that present before initiation of maintenance dialysis in older patients tend to be anorexia, weight loss, fatigue, nausea, and vomiting. The recognition of uremia in the older patient, however, may prove difficult. Behavioral changes, unexplained impaired cognition, “adult failure to thrive,” unexplained worsening of congestive heart failure, or a change in sense of well-being may be a manifestation of uremia in the geriatric patient.
While timely initiation of RRT avoids the need for urgent dialysis, clinical trials of earlier initiation of dialysis (GFR 10–15 mL/min/1.73 m2) have shown no significant benefit compared to later initiation (GFR 7 mL/min/1.73 m2). A strong correlation between “baseline” serum albumin just prior to initiation of dialysis and patient survival has been demonstrated. Although hypoalbuminemia itself does not necessarily indicate protein- energy malnutrition, it is believed to be a major contributing factor. Analysis of the Modification of Diet in Renal Disease (MDRD) study showed that patients tend to adapt to their declining GFR and associated uremic
symptoms by reducing their protein intake. Nevertheless, approximately 60%
of American ESKD patients experience nausea and vomiting at the time dialysis is initiated.
Contraindications to Renal Replacement Therapy
It may be reasonable not to start or to stop dialysis for older patients with a very poor prognosis or who cannot be dialyzed safely. For the practitioner, there are few absolute medical contraindications to RRT. Some propose that advanced dementia, metastatic cancer, heart failure with marked hypotension, and advanced liver diseases are reasons for withholding RRT. However, progressive dementia can be confused with uremia-induced delirium in a patient with advanced kidney dysfunction, and a “trial” of dialysis may be justified. It may take as long as 3 to 4 weeks to clear uremic symptoms with dialysis. The patient’s family should be aware that if the patient’s mental status fails to improve, RRT may be inappropriate. Similarly, providing dialysis for a patient with metastatic cancer or end-stage liver disease may allow the patient to get their affairs in order and spend some important time with friends and family. Cognitive and behavioral contraindications may play an even larger role in older patients than medical contraindications. Dialysis units are communities where a patient with inappropriate, unsafe, or violent behavior may adversely affect care provided to others at that unit.
Health-Related Quality of Life (HRQOL)
Maintaining health-related quality of life (HRQOL) is very meaningful for older patients with chronic illness. Early studies of older patients undergoing dialysis showed markedly lower functional status compared to older community-dwelling adults. However, hemodialysis has improved since these early studies, with advances in technology, treatment of comorbidities such as anemia and hyperparathyroidism, and quality improvement initiatives that have improved the HRQOL of patients on dialysis. As shown in Figure 83-3, there are decrements in both physical and mental well-being associated with dialysis compared to the general population across regions. While there is a marked decrement in physical well-being with older age, mental well-being scores are similar across age groups. These findings may be informative to older patients and health care providers, and they also underline the need to improve HRQOL among all patients undergoing dialysis. Interventions aimed at preserving residual kidney function, monitoring HRQOL, treatment of
anemia, engaging the patient in physical therapy and rehabilitation, applying palliative care principles, and perhaps more frequent and longer hemodialysis treatments may preserve HRQOL among older patients undergoing hemodialysis.
FIGURE 83-3. Kidney disease quality of life (KDQOL) physical component summary (PCS) and mental component summary (MCS) scores by age categories across Dialysis Outcomes and Practice Patterns Study (DOPPS) regions versus US population norm. Cross-section of participants in DOPPS III (2005–2007; n = 8161). Europe includes United Kingdom, France, Germany, Italy, Spain, Belgium, and Sweden; North America includes United States and Canada; ANZ represents Australia and New Zealand. (Adapted from Canaud B, Tong L, Tentori F, et al. Clinical practices and outcomes in elderly hemodialysis patients: results from the Dialysis Outcomes and Practice Patterns Study (DOPPS). Clin J Am Soc Nephrol.
2011;6[7]:1651–1662.)
CHOICE OF RENAL REPLACEMENT THERAPY
When faced with kidney failure, the older patient has a number of choices to make regarding therapy consistent with their overall level of well-being and goals of care. The most common forms of kidney replacement in the United States are three-times weekly outpatient hemodialysis, peritoneal dialysis (PD), and renal transplantation. There has also been a proliferation of home therapies and therapies tailored to older patients such as nursing home–based dialysis units. The older patient may also choose conservative management and thereby avoid dialysis or opt for palliative care. The patients should
have time to develop a relationship with the nephrologist and team in order to have discussions of goals for care and how RRT may be tailored to meet those goals.
Hemodialysis
Hemodialysis removes excess fluids and solutes from the blood in order to maintain euvolemia and homeostasis. The conventional hemodialysis schedule requires three treatments per week for approximately 4 hours per treatment. In order to perform hemodialysis, the patient must have an access placed to circulate the blood through the hemodialysis filter. The three options for hemodialysis access are the arteriovenous (AV) fistula, the AV graft, and temporary hemodialysis catheters. Permanent hemodialysis access requires minor surgery in the arm or leg. The AV fistula, a surgically created connection between a native artery and vein, matures and thickens to handle the higher blood flow rates and permits needle access after approximately 3 months. The use of an AV fistula has been associated with better access survival, fewer infections, fewer hospitalizations, and longer patient survival. The AV graft uses a synthetic bridge between the artery and the vein. The graft has been used in a broader population than the AV fistula but has the major limitations of shorter access survival as well as more infections and hospitalizations. For patients who have been referred late or who have acute kidney failure, hemodialysis is performed using a dialysis catheter, typically a large-bore catheter placed in a major vessel such as the superior vena cava. In the outpatient setting, these catheters are usually tunneled under the skin and accessed with sterile precautions by the dialysis unit. They have been associated with high rates of bacteremia, catheter malfunction, venous stenosis, and increased costs. Despite the increased rate of complications with grafts and catheters, the older adult should participate in shared decision-making to select an access, as the delayed maturation of a fistula may prove unsatisfactory to someone with a limited life expectancy.
By far, in-center hemodialysis remains the predominant form of dialysis in the United States. While most patients maintain a certain quality of life on hemodialysis, the drawbacks include pain, fatigue, depression, loss of freedom, dietary and fluid restrictions, and concern about burden to caregivers.
Peritoneal Dialysis
PD permits older patients to maintain more control over their schedule and play a larger role in the management of their kidney failure. PD uses the peritoneal membrane as a dialysis membrane by drawing excess fluid and toxins from the blood and into the peritoneal cavity, where the fluid is then drained through a plastic catheter that has been placed into the abdomen. Depending on the dialysis prescription, the peritoneal cavity is filled with fluid and drained a number of times over the course of the day or night. The advantages of PD include a less-restrictive diet and avoiding the need to travel to a dialysis unit for treatment. The disadvantages of PD include back pain, peritonitis, hyperglycemia, obesity, and hernia formation. PD has been successfully used in older patients, but its use in patients with poor functional status depends on a caregiver willing to commit to performing the daily therapy. PD can also be performed safely and effectively in a long-term care facility by staff with specialized training and with assistance of medical staff or a caregiver. The preferred mode of PD in this setting is continuous cyclic PD or nocturnal PD, which requires less nursing time, allows patients to be more fully integrated into social activities, and allows interruption-free intensive rehabilitation.
Kidney Transplantation
Kidney transplantation is a potential therapy for patients with ESKD, and the percentage of patients awaiting kidney transplantation has increased among older adults compared to all other age groups. As shown in Figure 83-4, among those older than 75 established on a kidney-transplant waitlist, fewer than 40% receive either a living or deceased kidney transplant in the subsequent 5 years. Overall life expectancy increases for older patients post kidney transplantation compared to those who are on dialysis and on the waitlist for a kidney transplant. Older patients with comorbidities such as diabetes and hypertension also have a survival advantage with kidney transplantation compared to those who remain on the waitlist on dialysis.
FIGURE 83-4. Yearly distribution of living donor transplantation, deceased donor transplantation, death, and removal from the waitlist after initial waitlisting for those greater than 75 years of age, 2009 to 2013. (Reproduced with permission from United States Renal Data System. 2020 USRDS Annual Data Report: Epidemiology of kidney disease in the United States. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Bethesda, MD, 2020.)
Health-related quality of life also improves for older patients after kidney transplant. Transplant patients older than age 65 have been shown to have significantly higher scores in physical functioning, general health perception, and mental health compared to hemodialysis patients. The older individual is not the only one to benefit from kidney transplant—health care systems also benefit, as the cost of care decreases.
Older individuals may have less rejection after kidney transplantation secondary to changes in both adaptive and innate immunity. However, as both T- and B-cell immunities are dampened with aging, transplantation is associated with a higher risk of infection in older adults.
For the older patient seeking a kidney transplant, screening for cognitive function and physical performance tests to better assess frailty should be part of the evaluation process. Physical performance tests can help screen older patients for the risk of falls, hospitalizations, and death posttransplantation.
The degree to which comorbidity influences graft and patient outcomes is significantly less in recipients of living donor kidneys than in recipients of deceased donor kidneys. A living donor kidney may offset some of the risk conferred by increased comorbidity in the older recipient, highlighting living donor kidney transplantation as an opportunity for older patients with kidney failure to receive optimal treatment.
Time-Limited Trial of Dialysis
One option for older patients with ESKD is a trial of hemodialysis to assess the extent to which the treatment is consistent with the life goals of the patient. A time-limited trial is reasonable when the patient, family, or physician is unsure about the prognosis or the impact that dialysis will have on the patient’s quality of life. If a trial of dialysis is conducted, it is important to predetermine a time period (usually 4–6 weeks) and to inform all members of the dialysis team. Such measures ease appropriate withdrawal from dialysis and may also help resolve conflicts when consensus cannot be reached on the best approach to managing an older patient with ESKD.
Conservative Management (Nondialytic Treatment) and Palliative Care
The approach to managing older adults who choose to forgo RRT is twofold. Comprehensive conservative therapy, or nondialytic treatment, focuses on maximal medical management of CKD and the metabolic complications of advanced CKD. While this aspect of care is focused on slowing the progression of kidney disease, palliative care or supportive care treats symptoms and addresses the psychological, spiritual, and social needs of the patient. In very old patients and in frail older patients, outcomes in terms of mortality and quality of life may be similar between those receiving dialysis and those receiving conservative management.
Conservative management should aim to maintain quality of life while honoring the shared decision to avoid dialysis therapy. In addition to continued management of fluid status, anemia, electrolytes, and bone metabolism, symptoms should be addressed by the geriatric, nephrology, and palliative care teams. In both patients undergoing dialysis and those receiving conservative care, symptoms should be treated, since pain, dry itchy skin, poor sleep, and fatigue impact mood and quality of life. Dietary intervention aimed at delaying dialysis has been shown to provide an additional 6 to 12 months off dialysis. This suggests that nutritional and palliative approaches play an important role in managing patients who elect to avoid dialysis.
MANAGEMENT OF END-STAGE RENAL DISEASE COMPLICATIONS
Anemia
Anemia is a common problem in CKD and kidney failure. The cause of anemia may be multifactorial, due to both CKD and other chronic conditions. In addition to CKD, one should consider iron deficiency due to loss of blood from the gastrointestinal (GI) tract or hematologic etiologies. Among older patients with CKD, the fall in hematocrit correlates roughly with the severity of the renal disease, although individual variation is considerable. In general, anemia develops when the GFR falls to around 35 mL/min. The bone marrow is hypoproliferative while peripheral red cell indices remain normal unless there is a superimposed deficiency of iron or folic acid; thus, the anemia seen with kidney failure is typically normochromic and normocytic. Lack of erythropoietin production is the primary cause of anemia in kidney failure: erythropoietin levels are inadequate in CKD, depriving the bone marrow of the stimulus necessary for production of red blood cells. The isolation of human erythropoietin and subsequent production of recombinant erythropoietin was a major advance in the care of those with CKD. Since 1989, synthetic erythropoietin has been available and effectively treats the anemia of chronic renal failure. Darbepoetin permits less frequent dosing regimens, and other agents are under investigation. Hypoxia-inducible factor prolyl hydroxylase inhibitors (HIF PHIs) are a class of oral agents that stabilize HIF and promote red blood cell production. A growing number of studies have demonstrated noninferiority compared to erythropoietin-based therapies for correction of hemoglobin, with mixed results regarding the impact of HIF PHIs on cardiovascular events.
Correction of anemia has been associated with improved quality of life, reduced need for transfusions, improved cognitive performance, and decreased left ventricular hypertrophy. There are adverse effects of using erythropoietin in the CKD patient, including iron deficiency (as the stimulation of red blood cell production outstrips iron stores), hypertension, cardiovascular events, and vascular thrombosis.
The optimal hemoglobin target for patients with anemia caused by CKD remains somewhat unclear. The current conservative target is a hemoglobin level near 11.5 g/dL. In addition to close attention to erythropoietin dosing and hemoglobin levels, adequate anemia management requires routine assessment and treatment of iron deficiency. Iron levels tend to drop in patients on erythropoietin therapy because of increased iron utilization. Because ESKD patients absorb oral iron poorly, replacement iron is usually given intravenously at the end of dialysis treatments. ESKD patients are considered to have “functional” iron deficiency when ferritin falls below 500 ng/mL or iron saturation is less than 30%.
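The numeric thresholds above can be collected into a small sketch. The ferritin, iron-saturation, and hemoglobin values come directly from the text; the tolerance band around the hemoglobin target and the function names are illustrative assumptions, not guideline definitions, and no such check replaces clinical judgment:

```python
def functional_iron_deficiency(ferritin_ng_ml: float, tsat_percent: float) -> bool:
    """Thresholds stated in the text: ferritin < 500 ng/mL
    or transferrin saturation < 30%."""
    return ferritin_ng_ml < 500 or tsat_percent < 30


def near_hemoglobin_target(hb_g_dl: float, target_g_dl: float = 11.5,
                           tolerance_g_dl: float = 0.5) -> bool:
    # The text gives only "near 11.5 g/dL"; the tolerance band here is an
    # illustrative assumption, not a guideline value.
    return abs(hb_g_dl - target_g_dl) <= tolerance_g_dl


print(functional_iron_deficiency(350, 25))  # True
print(near_hemoglobin_target(10.2))         # False
```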
Cardiovascular Disease
Many more patients have CKD than receive RRT. This disparity is explained in part by high mortality from cardiovascular disease in the CKD population before dialysis is ever needed. While on dialysis, the risk of death from cardiovascular disease remains 10 to 100 times higher than that of the general population. Traditional risk factors account for only a portion of the increased risk associated with kidney disease, and the etiology of the remaining excess cardiovascular risk is unclear. A number of classical risk factors are shared between atherosclerosis and kidney disease, such as age, hypercholesterolemia, hypertension, diabetes mellitus, smoking, and obesity. In addition, a number of factors are specific to kidney failure, such as anemia, hyperhomocysteinemia, hypervolemia, and hyperparathyroidism. These factors can act to promote both cardiomyopathy and ischemic heart disease. Current treatment recommendations focus on optimizing management of known cardiovascular risk factors and on recognizing harbingers of active cardiovascular disease. While guidelines recommend statins for patients with CKD, a reasonable course is to individualize cardiovascular disease care for older patients with ESKD. Those caring for patients with CKD should consider cardiovascular disease management an important part of their care.
Calcium, Phosphorus, Hyperparathyroidism, and Bone Disorders
Kidney failure is associated with abnormalities of divalent ions (Ca2+, P) and of the hormones that regulate the concentration of these minerals in body fluids. One of the earliest detectable abnormalities in kidney failure is a rise in PTH, which can occur at a GFR of 40 mL/min. Secondary hyperparathyroidism is a major cause of bone disease in patients with kidney failure. Two inhibitors of PTH are ionized calcium (the active form of calcium) and 1,25-(OH)2D3, which has an independent inhibitory effect on PTH release. No difference has been found in hyperphosphatemia or severity of renal osteodystrophy between older and younger dialysis patients, and there is no correlation between plasma 1,25-dihydroxyvitamin D3 levels and age in patients with kidney failure. Older female dialysis patients have significantly lower bone mineral content and bone width than younger dialysis patients matched for duration of dialysis. There is a higher incidence of pathologic fracture, vascular or metastatic calcification, and bone pain in older patients on dialysis than in younger patients. The overall result of these disturbances is that patients with ESKD may develop a complex form of bone disease—renal osteodystrophy. Furthermore, there is a strong association between disordered bone and mineral metabolism and risk of cardiovascular disease. This association between calcium and phosphate metabolism has been most striking when cardiac imaging demonstrating extensive calcification has been related to levels of both calcium and phosphate.
The management of bone disorders among older patients with ESKD is complicated by other bone diseases found in this age group. For example, osteoporosis and renal osteodystrophy may both be present in the older patient with ESKD. Treatment of renal osteodystrophy and secondary hyperparathyroidism in the geriatric renal patient is similar to that in younger patients. Hyperphosphatemia is treated with a low-phosphate diet and with various phosphate binders. Refractory hyperparathyroidism can be treated with parathyroidectomy or with a trial of a calcimimetic. While the choice of agents used to regulate calcium and phosphate levels remains controversial, the goals of normalizing calcium and phosphate levels and improving bone health are widely accepted as important in the management of CKD patients.
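The GFR thresholds named in this section and the earlier anemia section (PTH rising around a GFR of 40 mL/min; anemia developing around 35 mL/min) can be summarized in a small illustrative lookup. This is a mnemonic sketch of approximate single values from the text, not a clinical rule:

```python
def ckd_thresholds_reached(gfr_ml_min: float) -> list:
    """Illustrative lookup of GFR thresholds named in the text:
    PTH begins to rise around 40 mL/min; anemia develops around 35 mL/min."""
    findings = []
    if gfr_ml_min <= 40:
        findings.append("rising PTH (secondary hyperparathyroidism)")
    if gfr_ml_min <= 35:
        findings.append("anemia of CKD")
    return findings


print(ckd_thresholds_reached(38))  # ['rising PTH (secondary hyperparathyroidism)']
```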
Cognitive Impairment
Cognitive impairment is prevalent among patients with ESKD, especially as patients age. Its potential causes in patients with CKD may be explained by two hypotheses. The vascular hypothesis holds that the brain and the kidneys are low-resistance end organs exposed to highly pulsatile blood flow, a stress exacerbated by CKD, and are thus susceptible to microvascular damage that contributes to cognitive impairment in CKD patients. The neurodegenerative hypothesis holds that the accumulation of uremic toxins in ESKD patients impairs the functions of the central nervous system in this population.
Cognitive impairment in ESKD is associated with decreased quality of life, suboptimal medical care, decreased adherence to medication regimens
and dietary modifications, more frequent and prolonged hospitalizations, and decreased decision-making capacity, especially in making very important health-related decisions. This underlines the need for periodic screening to accurately identify ESKD patients with cognitive impairment in order to improve their clinical care. Recognizing and addressing cognitive impairment in ESKD may lead to improved adherence to treatment, increased ability to make informed medical decisions, decreased rate and length of hospitalization, decreased morbidity and mortality, and improved health-related quality of life.
Malnutrition
Nutrition plays an important role in the health and well-being of patients with kidney failure. Poor nutrition in older patients can be linked to the dietary restrictions imposed on ESKD patients, compounded by challenges of access to food, difficulty with food preparation, medication side effects, and decrements in appetite. Patients with kidney failure benefit from close monitoring of nutritional status. The recommended assessments include, but are not limited to, periodic checks of body weight, dietary interviews, and serum albumin levels. A renal dietician may offer alternative foods and recommendations to patients who feel limited by fluid, sodium, potassium, and phosphate restrictions. Weight loss and inability to maintain nutritional status are indications for initiating hemodialysis, and the adequacy of dialysis therapy should be evaluated to avoid anorexia and nausea. Dietary supplements and vitamin and mineral supplements, such as zinc or oral pyridoxine (50 mg/day), might be helpful. Adherence to a regimented diet in the older patient should be balanced against adequate protein and calorie intake.
Physical Functioning and Frailty
Most older individuals with ESKD have some degree of functional decline or frailty, which is associated with high risk of falls, disability, hospitalization, and mortality. Among patients initiating dialysis, frailty has been associated with increased risk of death and shorter time to first hospitalization. Several potential mechanisms have been suggested to explain the association of ESKD with poor functional status and frailty: elevated inflammatory markers, poor nutritional status, weight loss, and sarcopenia, as well as complications of ESKD such as electrolyte and acid–base disturbances, hyperphosphatemia, anemia, and bone and mineral disorders, are all linked to frailty in the ESKD population. Management includes evaluation and treatment of reversible conditions contributing to frailty, such as neglect, alcohol abuse, and depression. Resistance exercise training, improvement in nutritional status, and treatment of complications of CKD such as anemia are also suggested to address frailty.
Pruritus
Itching has been reported in up to 40% of patients with kidney failure and can adversely affect sleep and quality of life. The causes of itching in older patients with kidney failure include xerosis, uremic itching, and medication sensitivity. Pruritus is also common in the older dialysis patient, possibly because of skin changes seen with aging. As part of an approach to managing uremic pruritus, it is recommended to optimize dialysis treatment by increasing dialysis dose, treating anemia and iron deficiency, and maintaining a low serum phosphate. Treatment of itching caused by xerosis or uremia has been largely symptomatic local therapy, keeping the skin protected and moist with standard skin care measures. In addition to these symptomatic approaches, topical capsaicin and pramoxine lotions may also be effective. Low-dose gabapentin has been shown to reduce uremic pruritus among patients with advanced CKD on hemodialysis. Kappa-opioid agonists have demonstrated efficacy in reducing itch in randomized placebo-controlled trials among patients undergoing thrice-weekly hemodialysis. Finally, antihistamines such as diphenhydramine or hydroxyzine should be avoided if at all possible due to increased anticholinergic adverse events in this age group.
SPECIAL ISSUES
Prognosis and Survival
Kidney failure is associated with a decrease in life expectancy for all age groups compared to age-matched peers without kidney failure. The survival of older patients undergoing dialysis has been assessed recently, showing a 1-year survival rate of 54% for octogenarians and nonagenarians after dialysis initiation. The characteristics most strongly associated with death were older age, nonambulatory status, comorbid conditions, and frailty. In addition to survival, it is important to convey the impact of dialysis on quality of life and the limited life expectancy of those patients who are not transplant candidates, particularly those with limited functional status and comorbidity. Ultimately it remains up to the individual whether to pursue dialysis or palliative therapy.
There have been increasingly successful attempts to develop and test predictive survival models for outcomes of older patients with ESKD that could be used for shared decision-making. As one example, residing in a nursing home conveys a high risk of death in the first year of dialysis. As shown in Figure 83-5, 87% of nursing home residents died or experienced decreased functional status within 1 year of starting hemodialysis. In addition to place of residence, clinicians can use functional status, other chronic health conditions, and frailty to improve their estimate of prognosis.
Integrated prognostic models can provide additional precision by taking into account laboratory values, comorbidities, changes in clinical factors over time, functional status, frailty, quality of life, and either the patient’s or clinician’s prediction of survival. The “surprise question”—Would you be surprised if your patient died within the next 6 months?—is a strong indicator of mortality, and when combined with serum albumin, age, and two comorbid factors (dementia and peripheral vascular disease) results in an instrument with clinically adequate sensitivity and specificity. The application of prediction tools will help to overcome practitioner uncertainty about prognosis and increase the likelihood of meaningful dialogues between clinicians, older patients, and their families. However, there remains a critical gap among the existing predictive instruments since they uniformly assess only survival. Other issues such as quality of life, functional status, and independence are critical factors for older patients when deciding whether to proceed with RRT.
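The inputs the text names for the combined “surprise question” instrument (the question itself, serum albumin, age, dementia, and peripheral vascular disease) can be sketched as a data structure. The published instrument’s actual coefficients are not given in the text and are not reproduced here; the tally below, including the albumin and age cutoffs, is a deliberately simplified hypothetical illustration of how such inputs might combine, not the validated model:

```python
from dataclasses import dataclass


@dataclass
class PrognosticInputs:
    """Inputs named in the text for the integrated instrument."""
    not_surprised_by_death_in_6mo: bool  # the "surprise question"
    serum_albumin_g_dl: float
    age_years: int
    dementia: bool
    peripheral_vascular_disease: bool


def high_risk_flag(p: PrognosticInputs) -> bool:
    # Hypothetical tally, NOT the published model's coefficients:
    # flag a positive surprise question plus any other adverse feature.
    # The albumin (<3.5 g/dL) and age (>=80) cutoffs are illustrative.
    other = sum([
        p.serum_albumin_g_dl < 3.5,
        p.age_years >= 80,
        p.dementia,
        p.peripheral_vascular_disease,
    ])
    return p.not_surprised_by_death_in_6mo and other >= 1


print(high_risk_flag(PrognosticInputs(True, 3.0, 82, False, True)))  # True
```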
FIGURE 83-5. Change in functional status after initiation of dialysis. (Reproduced with permission from Kurella Tamura M, Covinsky KE, et al. Functional status of elderly adults before and after initiation of dialysis. N Engl J Med. 2009;361[16]:1539–1547.)
Withdrawal from Dialysis
Stopping dialysis has become a common reason for death in the dialysis population, especially for older patients. Up to one-third of patients are withdrawn or voluntarily withdraw from dialysis therapy annually. The factors associated with dialysis discontinuation in the United States include White race, diabetes, female sex, symptoms, and older age. The reasons cited for withdrawal from dialysis include failure to thrive and medical complications. While dialysis patients should understand that they have the right to stop treatment, the health care team should ensure that the patient is not withdrawing because of an underlying depression or “burdens” that can be ameliorated. It is important to document the decisional capacity of the older adult on dialysis wishing to withdraw and involve family and friends.
While the end-of-life issues in renal failure have received increasing attention, there remain many barriers to management of these issues, as underscored by the fivefold lower use of hospice among patients undergoing hemodialysis compared to those older patients not using dialysis. In order to overcome the barriers to a good death for those on dialysis, it is important to have a multidisciplinary approach to advance care planning and an awareness of hospice services.
Advance Care Planning
Practice guidelines underscore that a complete review of treatment options, prognosis, and quality of life should be included when discussing advance directives of a patient with CKD stages 4 and 5. Having the patient’s wishes expressed prior to starting dialysis makes the burden of decision-making much easier for the families and physicians should the patient become critically ill. It is crucial to involve the family and surrogates in the advance directives process and to fully document the directives in order to avoid a change in direction with an unplanned hospitalization. Patients who decide not to undergo dialysis should have clear documentation in their medical records. There should be similar documentation of do-not-resuscitate orders, wishes regarding artificial nutrition and other measures, and identification of their health care surrogate.
SUMMARY
Providers caring for older adults face many challenging chronic conditions, and CKD can be a primary kidney disease or a cotraveler with diabetes and hypertension. Like CKD, AKI is often found in older hospitalized patients and may be related to a primary illness such as sepsis or heart failure. The management of a patient with CKD can be optimized with the use of a multidisciplinary team and appropriate consultation. In most cases, medical management of CKD serves to delay kidney failure and avoid the need for dialysis. For those patients progressing to kidney failure, management of complications such as anemia and malnutrition can help sustain quality of life and functioning. In addressing the treatment options for kidney failure, the older patient faces a range of options from conservative medical management to transplantation, and these decisions should be informed by shared decision-making. Primary care and geriatric teams will play a key role in addressing the long-term care and management of these complex medical problems.
FURTHER READING
Burns RB, Waikar SS, Wachterman MW, Kanjee Z. Management options for an older adult with advanced chronic kidney disease and dementia: grand
rounds discussion from Beth Israel Deaconess Medical Center. Ann Intern Med. 2020;173(3):217–225.
Chertow GM, Pergola PE, Farag YMK, et al. Vadadustat in patients with anemia and non-dialysis-dependent CKD. N Engl J Med.
2021;384(17):1589–1600.
Cooper BA, Branley P, Bulfone L, et al. A randomized, controlled trial of early versus late initiation of dialysis. N Engl J Med. 2010;363(7):609–619.
Cooper CJ, Murphy TP, Cutlip DE, et al. Stenting and medical therapy for atherosclerotic renal-artery stenosis. N Engl J Med. 2014;370(1):13–22.
Hill NR, Fatoba ST, Oke JL, et al. Global prevalence of chronic kidney disease—a systematic review and meta-analysis. PLoS One.
2016;11(7):e0158765.
Kurella Tamura M, Covinsky KE, Chertow GM, Yaffe K, Landefeld CS, McCulloch CE. Functional status of elderly adults before and after initiation of dialysis. N Engl J Med. 2009;361(16):1539–1547.
Navaneethan SD, Zoungas S, Caramori ML, et al. Diabetes management in chronic kidney disease: synopsis of the 2020 KDIGO Clinical Practice Guideline. Ann Intern Med. 2021;174(3):385–394.
Vilaca T, Salam S, Schini M, et al. Risks of hip and nonvertebral fractures in patients with CKD G3a-G5D: a systematic review and meta-analysis. Am J Kidney Dis. 2020;76(4):521–532.
Wheeler DC, Stefánsson BV, Jongs N, et al. Effects of dapagliflozin on major adverse kidney and cardiovascular events in patients with diabetic and nondiabetic chronic kidney disease: a prespecified analysis from the DAPA-CKD trial. Lancet Diabetes Endocrinol. 2021;9(1):22–31.
Chapter
84
Aging of the Gastrointestinal System and Selected Lower GI Disorders
Karen E. Hall
Gastrointestinal (GI) symptoms are common in patients aged 65 and older and can range from mild self-limited episodes of constipation or acid reflux to life-threatening episodes of infectious colitis or bowel ischemia.
According to data from the US Census Bureau in 2005, 45 to 50 million people older than age 65 had at least one GI complaint that impacted their daily life and might result in a medical visit. In older adults, GI disorders, especially those of the large intestine, account for a significant proportion of physician visits, inpatient hospitalizations, and health care expenditure in the United States. Not only are large intestinal disorders common, but in older adults their presentations, complications, and treatment may be different than in younger people. This chapter focuses on changes in the GI tract with aging, and diagnosis and treatment of a variety of intestinal diseases, including diverticular disease, Clostridium difficile–associated diarrhea, microscopic colitis, inflammatory bowel disease, colonic ischemia, colonic obstruction, and lower GI bleeding. Other chapters cover disorders of the upper GI tract (Chapter 85); hepatic, biliary and pancreatic diseases (Chapter 86); and constipation (Chapter 87). GI malignancies, such as gastric cancer and colonic cancer screening and treatment, are covered in Chapter 92.
AGING OF THE GASTROINTESTINAL SYSTEM
SECTION D
Gastroenterology
Older adults may present with unusual or subtle symptoms of serious GI disease due to alterations in physiology with aging. For example, a patient with a GI perforation or colitis may not have guarding or significant abdominal tenderness due to decreased visceral sensitivity.
Learning Objectives
Understand the effects of aging on GI function.
Recognize common presentations of GI dysfunction in older adults.
Understand key differences in diagnosis and treatment for a variety of disorders of the large intestine between younger and older patients.
Determine the most suitable evaluation and management plans for disorders of the large intestine frequently encountered in clinical practice.
Key Clinical Points
Dysmotility in the colon is common in older adults and is often due to a combination of effects of aging and superimposed disease.
Older patients with serious GI disease, such as intestinal ischemia or perforation, may present with subtle symptoms due to age-related visceral hyposensitivity. Thus, the severity of the condition may be underestimated.
The aging process per se has clinically significant effects on GI immunity and GI drug metabolism.
Advanced age is not a contraindication to gastrointestinal endoscopic procedures, and diagnostic testing is relatively high yield.
Some GI dysfunction in older patients can be attributed to the superimposed effects of chronic diseases and environmental/lifestyle exposures (medications, alcohol, tobacco). The aging process per se has clinically significant effects on oropharyngeal and upper esophageal motility (see Chapter 31), colonic function, GI immunity, and GI drug metabolism (Figure 84-1). On the other hand, because the GI tract exhibits considerable reserve capacity, many aspects of GI function, such as intestinal secretion and absorption, are preserved with aging. A modest decline in function with aging, such as in gastric mucosal cytoprotection or esophageal acid clearance, may become significant when side effects of certain medications or concurrent disease are superimposed. Common age-related changes in GI function, such as constipation, can have multiple causes, including medications, pelvic floor dysfunction, and comorbidities such as progressive neurodegenerative disorders. Additional research is needed on the effects of aging on the pathophysiology of swallowing disorders, esophageal reflux, dysmotility syndromes, GI immunobiology and the microbiome, and the cellular mechanisms of neoplasia in the GI tract. Animal studies provide important insights into the cellular physiology of aging, despite the issue of species variation.
FIGURE 84-1. Effects of physiologic aging on the gastrointestinal tract. This schematic diagram summarizes significant effects of aging on various divisions of the gastrointestinal tract. Key: up arrow, increased; down arrow, decreased. LES, lower esophageal sphincter; UES, upper esophageal sphincter.
Small Intestinal Function
Small bowel function appears to be relatively preserved in normal human aging. The small intestine has a large functional reserve capacity, because of the substantial mucosal surface area available for secretion and absorption. Changes in small bowel epithelial development and intestinal absorption with aging have been described and are summarized in Table 84-1.
TABLE 84-1 ■ SMALL INTESTINAL FUNCTION
In the absence of significant small bowel damage or surgical resection, these changes are unlikely to result in significant weight loss or malnutrition.
Colonic Function
Aging is associated with diverse effects on the large intestine including alterations in mucosal cell growth, differentiation, metabolism, and immunity (Table 84-2).
TABLE 84-2 ■ COLONIC FUNCTION
These changes likely contribute to the observations of increased constipation and risk of malignancy in older adults.
Gastrointestinal Immunity
The GI tract is the largest immunological system in mammals. Older people appear to be more susceptible to infections that enter the body via the GI tract, suggesting that aging may impair mucosal immunity. The GI mucosal immune response in the small intestine is a complex process that involves a series of events: antigen uptake and presentation of antigen at the mucosal surface by specialized epithelial cells (M cells) overlying Peyer patches in the small intestine; differentiation and migration of immunologically competent lymphocytes to the lamina propria; regulation of local antibody production in the intestinal wall; and mucosal epithelial cell receptor–mediated transport of antibodies to the intestinal lumen. Table 84-3 summarizes the effect of aging on intestinal immunity.
TABLE 84-3 ■ GASTROINTESTINAL IMMUNITY
Gastrointestinal Drug Metabolism
Older patients are at increased risk for drug interactions and adverse drug reactions, primarily because of the large number of drugs prescribed in this age group and their known side effects. While most drug metabolism occurs in the liver, usually via the cytochrome P450 system, the CYP3A subfamily is also expressed in the GI tract. This subclass oxidizes a wide variety of drugs and toxins, including procarcinogens such as aflatoxins, calcium channel antagonists, immunosuppressant agents, cholesterol-lowering agents, benzodiazepines, nonsedating antihistamines, and macrolide antibiotics. CYP3A activity is reduced by 25% to 50% in aged individuals, which may explain some of the risk for adverse drug effects.
COMMON INTESTINAL DISORDERS
Diagnosis of GI disorders in an older adult patient poses several challenges to the physician. First, comorbid illnesses are frequent and often numerous, and some such as dementia and depression may impair adequate communication between patient and caregiver. Second, medications and their side effects may cloud the clinical picture as polypharmacy is common in older adults.
Differences in the usual diagnoses for common presenting lower GI symptoms among older adults are summarized in Table 84-4. For example, rectal bleeding in a young person is most commonly from hemorrhoids or inflammatory bowel disease (IBD); in older adults, diverticulosis, ischemic colitis, or colon cancer are more common causes of rectal bleeding. A complete and thorough history is imperative in older adults, as subtle clues to the diagnosis are sometimes dismissed as physiologic aspects of aging. Physical examination and some laboratory tests, including tests of liver function, are unaffected by aging, and any abnormality should be evaluated for the presence of a disease state and not dismissed as an age-related change.
TABLE 84-4 ■ INFLUENCE OF AGE ON LIKELY DIAGNOSIS OF LOWER GASTROINTESTINAL SYMPTOMS
Colonoscopy
Colonoscopy in older adults is safe and well tolerated. Several studies of indications and outcomes of patients older than 80 years undergoing elective and emergency endoscopic procedures found those tests to be safe. Moreover, the yield of diagnostic colonoscopy in older adults is relatively high. Colonoscopy is often done for colon cancer screening (see Chapter 92).
Adequate bowel preparation is critical to a successful colonoscopic examination. Bowel cleansing in older adults should be performed with care. Preparation with standard doses of polyethylene glycol–based lavage solutions (PEG-ELS) in older adults is well tolerated and produces satisfactory bowel cleansing in more than 95% of all cases. Split dose regimens, where a part of the preparation is given 6 to 8 hours before the procedure, are recommended for optimal bowel cleansing. Sodium phosphate osmotic laxative preparations should not be used in older adults,
as they cause significant fluid shifts and may cause electrolyte abnormalities or phosphate nephropathy and renal failure in this subset of patients.
Most colonoscopies are performed under moderate sedation. Sedation for colonoscopy usually includes a combination of a benzodiazepine (midazolam or diazepam) and a narcotic (meperidine or fentanyl), or may include a short-acting anesthetic agent such as propofol. As older adults may be more sensitive to these agents, small incremental doses should be given and the patient monitored closely for signs of cardiopulmonary compromise.
Endoscopic ultrasound Endoscopic ultrasound (EUS) can be used to diagnose and manage disease of the anorectum and usually does not require sedation. EUS may be helpful in evaluating the layers of the rectal wall, the internal and external anal sphincters, and the pelvic floor muscles in patients with fecal incontinence. EUS is frequently used to stage rectal malignancy, providing information about the depth of tumor invasion and the status of regional lymph nodes. Direct tissue sampling is available through fine needle aspiration and biopsy at the time of the EUS.
Capsule endoscopy Wireless capsule endoscopy is an increasingly important test in the evaluation of obscure GI bleeding. The capsule transmits images of the small bowel to a hard drive that the patient wears, and following the completion of the study the images are downloaded to a computer for review. Common findings in older adults being evaluated for obscure bleeding or iron deficiency anemia include angioectasias and ulcers.
Contrast studies Contrast studies of the large intestine involve coating the colonic mucosa with a contrast medium, usually barium sulfate, following thorough colonic preparation. Barium enemas may be performed by either the single- or double-contrast method; in the latter, air is insufflated as well as barium. The single-contrast technique often is used to diagnose colonic strictures, fistulas, obstruction, or diverticulitis. Double-contrast barium enema more commonly is used to detect polyps or mucosal abnormalities.
CT colonography CT colonography (virtual colonoscopy) is a radiographic technique that combines helical CT and graphics software to create a three-dimensional view of the colonic lumen. This technology was developed to detect colonic polyps. In clinical trials of CT colonography, detection rates for polyps greater than 5 mm are similar to those for optical colonoscopy.
Specific recommendations for use in colon cancer screening are covered in Chapter 92.
Diverticular Disease
Colonic diverticula are herniations of colonic mucosa through the smooth muscle layers of the colon. Strictly speaking, because colonic diverticula do not involve the muscle layer but rather are herniations of the mucosa and submucosa, they are actually pseudodiverticula. Diverticulosis has been increasingly recognized in Western society, and prevalence in the descending and sigmoid colon increases with age. Diverticula are present in approximately one-third of persons by age 50 and in approximately two-thirds by age 80. In Western society, the majority of diverticula occur on the left side of the colon, specifically the sigmoid colon, although diverticula can occur anywhere in the colon.
Pathophysiology There are three factors implicated in the pathogenesis of colonic diverticulosis. First, altered colonic motility results in increased luminal pressure along segments of the colon, and the resulting high-pressure areas cause outpouchings at areas of weakness. Second, low intake of dietary fiber may contribute because low stool weights and slower stool transit times allow for relative increases in colonic intraluminal pressure. Third, with age the structural integrity of the colonic muscular wall decreases, and diverticula are more likely to form.
Asymptomatic diverticulosis Diverticulosis is usually an incidental finding in patients undergoing radiographic studies or colonoscopy for other reasons. There is no indication for therapy or follow-up in such patients. Large cohort studies suggest that complications of diverticular disease may be prevented by intake of a high-fiber diet.
Painful diverticular disease Some patients with diverticulosis have left lower quadrant pain, and when examined, do not have evidence of inflammation. These patients may have painful diverticular disease. Pain often is described as crampy, located in the left lower abdomen, and may be associated with diarrhea or constipation as well as tenderness over the affected area. The pain is often exacerbated by eating and diminished by defecation or the passage of flatus. The symptoms of painful diverticular disease often overlap with those of irritable bowel syndrome, and therefore painful diverticular disease is considered part of the spectrum of functional bowel disorders. It is important to consider other causes of left lower quadrant pain such as diverticulitis, colonic obstruction, and incarcerated hernias in such patients.
Diverticulitis Diverticulitis, defined as diverticulosis in association with inflammation, infection, or both, is probably the most common clinical manifestation of diverticular disease. Diverticulitis develops in approximately 10% to 25% of individuals with diverticulosis who are followed for 10 years or more; however, less than 20% of these patients require hospitalization.
The process by which a diverticulum becomes inflamed has been compared to appendicitis: the neck of the diverticulum becomes obstructed by stool, and the resulting obstruction eventually leads to micro- or macroperforation of the diverticulum. Fever, leukocytosis, and rebound tenderness often ensue. Because of decreased visceral sensation, older patients may have reduced rebound tenderness, and the white blood cell (WBC) count may not be elevated; therefore, an aggressive evaluation is indicated if this diagnosis is suspected. Segmental colitis associated with diverticulitis (SCAD) is also increasingly recognized as a cause of abdominal pain. Unlike diverticulitis, the inflammation in SCAD involves the mucosa between diverticula and spares the diverticular orifices. The symptoms include chronic diarrhea, intermittent bleeding, and pain. There is evidence that this entity may overlap with IBD.
Clinical guidelines recommend abdominal and pelvic CT scan for diagnosis. Colonoscopy should also be performed in older patients, but should be delayed until inflammation has improved because of an increased risk of colonic perforation. Patients with severe pain, nausea, and vomiting often require hospitalization. Antibiotics should be used selectively rather than routinely in older patients who are immunocompetent and have mild disease. Most patients with diverticulitis improve within 48 to 72 hours. Selected patients with relatively mild symptoms who are able to tolerate oral intake may be managed with close outpatient monitoring and clear fluids. Given the high incidence of complicated disease and the effects of dehydration in older patients with diverticulitis, there should be a low threshold for hospitalization.
Patients with complicated disease, such as abscesses, may need drainage by surgery or interventional radiology. Surgery is recommended for patients with diverticulitis who fail to respond to medical therapy within 72 hours, and for those with refractory abscess, obstruction, or involvement of the bladder by the inflammatory process. Elective segmental resection of the colon should not be based on the number of episodes of diverticulitis, but should be customized based on severity of disease, patient preference, and risks and benefits.
Diverticular hemorrhage Three to five percent of patients with diverticulosis have hemorrhage from a diverticulum. Diverticular hemorrhage is the most common identifiable cause of significant lower GI bleeding, accounting for 30% to 40% of cases with confirmed sources.
Bleeding associated with diverticula is typically brisk and painless.
While the majority of diverticula are located in the left colon, bleeding from diverticular disease usually arises from the right colon. Bleeding is attributed to arterial rupture of the vasa recta where they course over the dome of a diverticulum. Bleeding ceases spontaneously in 70% to 80% of patients, and rebleeding rates range from 22% to 38%. Rebleeding is more likely when the initial bleed is severe.
The initial step in the management of patients with hemodynamically significant bleeding from diverticulosis is stabilization with intravenous fluid and blood products as necessary. Stable patients with suspected diverticular hemorrhage may undergo colonoscopy following rapid colonic purge.
Colonoscopy in this setting can identify a diverticular source, exclude alternative diagnoses, and provide therapy of actively bleeding lesions.
In patients with recurrent bleeding, nuclear tagged red blood cell scans (scintigraphy) or CT angiography may localize the bleeding site. A positive bleeding scan may lead to angiography, which may allow for nonsurgical management of diverticular hemorrhage. Patients who require more than three units of packed red cell transfusions over 24 hours, have bleeding refractory to treatment, or are hemodynamically unstable may require surgical management. Preoperative nuclear red blood cell scans or angiography often help localize the diseased segment and allow for limited bowel resections. Blind total colectomy is rarely indicated.
Diarrhea
Diarrhea, while less common than constipation, causes significant morbidity in older adults. The etiology of acute diarrhea (lasting < 2 weeks) is similar in older versus younger adults, with a few exceptions. Most cases of acute diarrhea are related to viral or bacterial infection, but it can also be caused by medications, medication interactions, or dietary supplements.
Chronic diarrhea, lasting more than 2 weeks, may result from fecal impaction, medications, irritable bowel syndrome, microscopic (lymphocytic or collagenous) colitis, IBD, obstruction from colon cancer, malabsorption, small bowel bacterial overgrowth, thyrotoxicosis, or lymphoma. Many of these conditions are due not to changes of aging per se but to superimposed disease. Older patients may present with new-onset fecal urgency and frequency similar to diarrhea-predominant irritable bowel syndrome. The onset may coincide with an acute diarrheal illness caused by viral or bacterial infection. However, many patients continue to have distressing fecal urgency for weeks or months, resulting in considerable lifestyle impairment. Many curtail their travel and social activities outside the home for fear of fecal incontinence. The etiology is often multifactorial.
Side effects of medications that alter small bowel and colonic motility should always be considered. Decreased rectal compliance occurs with aging and may contribute to sensations of fecal urgency in these patients. Screening for fecal impaction resulting from constipation is always warranted in older patients, as it is often overlooked. Diarrhea and fecal urgency should not be attributed simply to aging, as they are often due to superimposed disease or other potentially treatable causes.
Clostridium difficile Colitis
C difficile, an anaerobic, gram-positive, spore-forming, toxigenic bacillus, was first isolated in 1935, but it was not until 1978 that the association between the toxin elaborated by this bacterium and antibiotic-associated pseudomembranous colitis was made. The organism is now recognized as the single most important cause of nosocomial infectious diarrhea in the United States. C difficile colitis is more prevalent in older adults because of
more frequent hospitalizations, increased antibiotic use, and increased time in institutional settings. C difficile colonization in long-term care facilities has been estimated to be at least 50% in the United States.
Pathogenesis often involves exposure to an agent, such as an antibiotic, that alters the normal colonic flora. While the most common antibiotics associated with C difficile colitis are ampicillin, amoxicillin, cephalosporins, and clindamycin, virtually all antibiotics (including those used to treat C difficile colitis) have been implicated in causing disease.
For this reason, efforts to decrease infection have focused on decreasing the reflexive or routine use of antibiotics in high-risk populations such as residents of nursing facilities. C difficile infection may result in an asymptomatic carrier state, or patients may develop diarrhea and colitis.
Patients with intact immune systems and an ability to mount an early antibody response to C difficile toxin usually become asymptomatic carriers of the organism. On the other hand, patients lacking sufficient ability to mount an adequate immune response develop diarrhea and colitis.
Risk factors for the development of C difficile colitis are summarized in Table 84-5.
TABLE 84-5 ■ RISK FACTORS FOR CLOSTRIDIUM DIFFICILE INFECTION
C difficile infection disproportionately affects older patients.
Approximately two out of three infections occur in patients aged 65 or older, and older people are also at high risk for recurrent infection. Risk factors that may especially predispose older patients include exposure to systemic antibiotics used to treat other diseases, contact with bacterial spores as a result of frequent health care exposure, and age-related changes in the immune system. Decreased functional status is also an independent risk factor for severe infection.
Clinical manifestations of C difficile infection range from asymptomatic carriage to mild to moderate diarrhea to life-threatening pseudomembranous colitis (Figure 84-2). While there is no evidence that age per se is a risk for asymptomatic carriage of C difficile, older patients appear to be at risk for developing severe disease including complications such as colonic ileus or toxic dilation.
FIGURE 84-2. Clinical manifestations of Clostridium difficile infection.
Diagnostic tests There are several tests for diagnosing C difficile colitis, and they appear to be equally effective in older and younger patients. The most widely used is an enzyme-linked stool immunoassay directed against one of the two C difficile toxins. While its main advantages are speed, low cost, ease of testing, and high specificity, this immunoassay has relatively low sensitivity. Polymerase chain reaction tests that detect toxin genes are available; however, these tests may be positive in asymptomatic carriers, and their use varies by institution. Other diagnostic tests, including C difficile stool culture and the tissue culture cytotoxicity assay, are less commonly used because of their high cost, need for specialized laboratory techniques, and length of time to make the diagnosis, but may be helpful if stool toxin assay results are equivocal.
Colonoscopy, or more often flexible sigmoidoscopy, may be helpful in making the diagnosis of C difficile colitis, but is usually not necessary.
Endoscopy is most useful when the diagnosis is in doubt or when disease severity demands rapid diagnosis. The finding of colonic pseudomembranes in a patient with antibiotic-associated diarrhea is almost pathognomonic for C difficile colitis (Figure 84-3).
FIGURE 84-3. Pseudomembranes.
Treatment Therapy for C difficile colitis begins with withdrawal of the precipitating antibiotics if possible. Older patients treated with metronidazole appear to have a high risk of treatment failure and disease recurrence. Thus, guidelines from the Infectious Diseases Society of America recommend oral vancomycin or fidaxomicin rather than metronidazole to treat C difficile–associated disease. The usual initial therapy for mild to moderate disease is a 10-day course of either oral vancomycin 125 mg four times daily or oral fidaxomicin 200 mg twice daily; this is effective in the majority of patients. Patients who are very ill should receive oral vancomycin 500 mg four times daily plus IV metronidazole 500 mg every 8 hours. Vancomycin retention enemas (500 mg in 100 mL normal saline every 6 hours) can be used in patients who cannot tolerate oral medications.
Intravenous vancomycin does not penetrate the colonic lumen and is not effective in treating C difficile colitis. Probiotic agents such as
Lactobacillus GG and Saccharomyces boulardii have been used to reconstitute the colonic microflora, and are occasionally added to metronidazole or vancomycin to treat C difficile colitis, but their effectiveness has not been demonstrated in well-designed trials.
Bezlotoxumab, a monoclonal antibody targeting C difficile toxin B, offers an option for treatment in patients with disease refractory to other treatments, although experience in older patients is limited.
Unfortunately, recurrent C difficile infection is a common problem, and older patients are at increased risk. Symptomatic recurrence may result from relapse of the same strain (usually within 28 days) or reinfection with a different strain of C difficile. Resistance to initial treatment is seldom an important factor in recurrence. Therefore, patients with recurrent C difficile colitis generally are given another trial with the antibiotic used to treat the initial infection. In some patients, a prolonged taper of vancomycin may be needed to prevent further recurrence (Table 84-6). Fecal microbiota transplantation is highly effective in the treatment of recurrent C difficile infection in older patients with disease refractory to usual vancomycin taper and use of oral probiotics, and is also very effective therapy for selected patients with severe or fulminant disease (Table 84-7).
TABLE 84-6 ■ VANCOMYCIN TAPER FOR SECOND RELAPSE OF CLOSTRIDIUM DIFFICILE COLITIS
TABLE 84-7 ■ INDICATIONS FOR FECAL MICROBIOTA TRANSPLANTATION FOR RECURRENT CLOSTRIDIUM DIFFICILE INFECTION
Microscopic Colitis
The term microscopic colitis refers to two distinct clinical entities with similar presentation, namely lymphocytic and collagenous colitis. They are characterized by chronic watery diarrhea and feature histologic evidence of chronic mucosal inflammation in the absence of endoscopic or radiological abnormalities of the large intestine. They differ principally by the presence or absence of a thickened collagenous band, which when present in collagenous colitis is located in the colonic subepithelium.
Both lymphocytic and collagenous colitis occur most commonly in people in their sixth to eighth decades, with a strong female predominance. Most patients present with chronic watery stools for months to years. The pattern of symptoms in patients with microscopic colitis fluctuates, with exacerbations and remissions over years. Crampy abdominal pain is common, and symptoms often improve with fasting. Patients are more likely to be active smokers and to use drugs such as NSAIDs and acid-suppressing medications for heartburn. Physical examination is usually unremarkable, and occult blood in the stool is uncommon. Colonoscopy is usually normal. It is important to exclude infectious causes of diarrhea by testing the stool for ova and parasites, bacterial pathogens, and C difficile toxin before making the diagnosis of microscopic colitis. The diagnosis relies on histopathologic evaluation of biopsy specimens from the diseased colon.
Treatment Table 84-8 summarizes treatment options for microscopic colitis. One-third of patients respond to antidiarrheal agents such as loperamide as well as stool bulking agents like psyllium or methylcellulose; however, these
agents do not improve the subepithelial inflammation or reduce the thickness of the collagen band. American Gastroenterological Association guidelines (2016) recommend the oral steroid budesonide 9 mg daily to induce remission, as several clinical trials showed benefit versus no treatment or treatment with mesalamine. Budesonide reduces inflammation in the bowel and is also used to treat IBD such as ulcerative colitis (UC) and Crohn disease. Budesonide is also the first-line agent recommended to maintain remission of microscopic colitis. If budesonide is contraindicated or not affordable, other medications listed in Table 84-8 can be used as second-line treatments. Oral prednisone, while effective in treating symptomatic microscopic colitis, is less desirable because of its many adverse effects in older people. In patients with severe refractory disease, diverting ileostomy or proctocolectomy is a treatment of last resort.
TABLE 84-8 ■ TREATMENT OF MICROSCOPIC COLITIS
Inflammatory Bowel Disease
Crohn disease and UC comprise the vast majority of IBD (Table 84-9). They are characterized by inflammation within the GI tract. IBD commonly has its onset in young adults, but is found with increasing frequency in older adults. As discussed in the section on diverticulitis, inflammation may present as segmental colitis associated with diverticula (SCAD), usually in the left colon, involving the interdiverticular mucosa. The older patient may present with new-onset diarrhea, abdominal pain, intermittent
bleeding, or symptoms of chronic complications of IBD such as anemia, weight loss, or obstruction of the small bowel or colon. There is considerable overlap between these symptoms and other conditions, such as bowel ischemia and cancer, that are more common in older patients; therefore, workup requires imaging of the bowel and, if possible, endoscopy of the affected bowel. An increasing number of older patients with IBD were diagnosed at a younger age and have been aging with their disease. They are at risk for long-term complications such as colon cancer and require additional surveillance compared to low-risk older patients. Most patients will have long-term involvement of a gastroenterologist; however, geriatricians and other primary care providers should be aware of common issues affecting IBD patients.
TABLE 84-9 ■ DIFFERENTIATING CROHN DISEASE AND ULCERATIVE COLITIS
There appears to be a bimodal distribution of age of onset, with the peak incidence of IBD occurring in the second and third decades and a second, smaller peak in older adults between the ages of 60 and 70. "Late-onset" IBD accounts for approximately 12% of cases of UC and 16% of cases of Crohn disease.
Crohn disease Crohn disease is a chronic inflammatory process of unknown etiology, which most often affects the distal ileum, but can affect any segment of the GI tract from mouth to anus. It is characterized by transmural inflammation of the bowel wall, the presence of aphthae and ulcers, and the interspersing of segments of involved bowel with uninvolved bowel (skip lesions). Fissures, fistulas, and strictures are common in Crohn disease.
Crohn disease of the colon, also known as Crohn colitis, is more common in older than in younger adults.
Symptoms and Signs The presentation of Crohn disease may be subtle and varies considerably. In older adults, Crohn disease often primarily involves the colon. Consequently, features usually associated with small bowel disease such as intestinal obstruction, perforation, and fistula are less common. The majority of older adult patients with Crohn disease have abdominal pain, weight loss, fever, and diarrhea. The diarrhea, typically watery in Crohn disease of the small bowel, can be bloody when the colon is involved.
Laboratory abnormalities such as anemia, leukocytosis, thrombocytosis, hypoalbuminemia, and elevated erythrocyte sedimentation rate and C-reactive protein vary with the severity of the illness. Notably, anemia in Crohn disease can result either from iron deficiency due to chronic GI blood loss or from vitamin B12 deficiency if the disease involves a large segment of the distal ileum, the site of vitamin B12 absorption in the small bowel.
Unfortunately, no single symptom, sign, or diagnostic test definitively establishes the diagnosis of Crohn disease, and prolonged delays in diagnosis may occur more frequently in older adults. In the end, a constellation of suggestive symptoms and laboratory abnormalities should prompt further evaluation. Common intestinal infections should be excluded by stool cultures, stool examination for ova and parasites, and assays for C difficile toxin.
Ultimately, the diagnosis of Crohn disease is confirmed by findings from CT or MR enterography, colonoscopy, and histopathology. These studies can identify the characteristic linear ulcers, skip lesions, and mucosal edema in Crohn disease. Enterography can identify pyogenic complications like abscesses and perforations, and also can detect other intra-abdominal pathology that might mimic the presentation of Crohn disease, such as appendicitis or nephrolithiasis.
Management The principles of IBD management are the same regardless of the age of the patient and are summarized in Table 84-10. The most commonly used medications in the treatment of Crohn disease include sulfasalazine, mesalamine (5-aminosalicylic acid), and corticosteroids; all of these are well tolerated in the older adult population. However, corticosteroid use confers a higher risk of complications.
TABLE 84-10 ■ DOSES AND ADVERSE REACTIONS WITH COMMONLY USED MEDICATIONS TO TREAT INFLAMMATORY BOWEL DISEASE
Immunomodulators such as azathioprine, 6-mercaptopurine, and methotrexate can be used effectively to maintain remission of Crohn disease. Although these agents are usually well tolerated in older adults, their use in older patients is low compared to younger patients. Concerns include an increased risk of infection and of malignancies such as nonmelanoma skin cancers. Biologic agents, such as the anti–tumor necrosis factor (TNF) drugs infliximab and adalimumab, have been used to manage moderate to severe Crohn disease; however, their use is also less common in older adults. Anti-TNF agents are contraindicated in patients with class III and IV heart failure. Other newer agents such as integrin antagonists, interleukin antagonists, and Janus kinase inhibitors have shown benefit in treating both Crohn disease and UC; however, use in older patients is very limited.
Antibiotics such as metronidazole and ciprofloxacin are effective in inducing and maintaining remission, as well as for healing perineal fistulas, in patients with Crohn disease. The long-term use of antibiotics typically is limited by the occurrence of significant side effects. Specifically,
irreversible peripheral neuropathy can occur with the use of metronidazole, while antibiotic-associated diarrhea may be a complication of prolonged ciprofloxacin use.
Older adults with ileal or ileal–colonic Crohn disease occasionally require intestinal resection, but generally tolerate surgery well and appear to have low rates of postoperative recurrence. Proctocolectomy with ileostomy is a common surgical option for patients with extensive Crohn colitis. In older adult patients who are debilitated or malnourished, an initial subtotal colectomy with ileostomy is less debilitating and permits weight gain and improved physical well-being. If proctocolectomy is subsequently required, it can be done with a low complication rate, but may not be necessary at all if rectal disease is absent. A conventional ileostomy is generally favored in older patients following colectomy, because anal sphincter sparing surgical procedures, such as an ileal pouch–anal anastomosis, often have poor functional results in older patients.
Ulcerative colitis UC is a chronic inflammatory disorder of the GI tract of unknown etiology that affects the mucosa and submucosa of the large intestine in a continuous fashion. The inflammatory process invariably involves the rectum and extends proximally to a variable distance, but does not involve the GI tract proximal to the colon. For many older patients, UC is a relatively mild illness, because colonic inflammation often is limited to the rectum or sigmoid colon. This distribution of disease is generally associated with fewer systemic manifestations, better response to medical therapy, and less need for surgery than more extensive UC.
Symptoms and Signs The severity of UC may be subjectively classified as mild, moderate, or severe and is generally proportional to the extent of colonic inflammation. Symptoms in older adults are similar to those seen in young patients, and include bloody diarrhea, rectal pain, tenesmus, urgency, and abdominal pain. In comparison to Crohn disease, the diarrhea in UC almost always is bloody. Fecal urgency, a sensation of incomplete evacuation, and fecal incontinence also are common. Unfortunately, older patients appear to be more likely than younger patients to present with a severe initial attack, and that first severe manifestation is associated with a relatively high fatality rate.
Laboratory findings in UC are nonspecific and reflect the severity of the underlying disease. In patients with limited distal disease, laboratory abnormalities may be absent except perhaps for mild anemia. In patients with
extensive disease, severe iron deficiency anemia, hypoalbuminemia, leukocytosis, and thrombocytosis are common.
Toxic megacolon is a feared complication of UC, and it occurs more frequently in older patients. One should be suspicious of toxic megacolon in a patient whose diarrhea improves but whose abdomen is distended and tympanic. Other markers of worsening systemic inflammation, such as fever and leukocytosis, will also be present. The diagnosis is usually made by abdominal radiography or CT imaging. Colonoscopy should not be attempted when there is a suspicion for toxic megacolon, as perforation may ensue.
Similar to the situation in Crohn disease, there is no single test that can definitively diagnose UC with acceptable sensitivity and specificity. In older adults, it is important to exclude other diseases that may mimic UC, such as ischemic colitis, radiation proctocolitis, diverticulitis, malignancy, and infectious colitis.
Endoscopic examination can demonstrate the classic findings of diffuse erythema, mucosal edema, granular mucosa, and ulcerations starting in the rectum without intervening areas of normal mucosa (Figure 84-4). In the proper clinical setting, flexible sigmoidoscopy with biopsy is usually sufficient to establish a diagnosis of UC. Complete colonoscopy with ileoscopy is necessary to determine the extent of disease and to exclude Crohn disease. However, complete colonoscopy is not recommended in patients with active UC for fear of perforation; the procedure can be safely performed once active disease has been controlled.
FIGURE 84-4. Ulcerative colitis as seen on colonoscopy.
Management Most older adult patients with UC respond favorably to medical management. Once remission is achieved, relapse occurs less frequently in older adults, regardless of the severity of the initial attack. The mainstays of treatment for UC are aminosalicylates, which may be administered orally or rectally. Enema and suppository formulations are reasonable choices for treating distal disease. Unfortunately, distal disease in older patients can be refractory to topical therapy, and older patients with distal disease may require oral formulations of aminosalicylates to achieve and maintain remission.
In addition to aminosalicylates, corticosteroids are effective in achieving remission. Because steroid use is associated with frequent side effects, corticosteroids should be used in UC only temporarily, as a means to induce remission. Steroids have not been shown to prevent relapses, and their side effect profile makes prolonged use unsatisfactory.
In patients who do not respond to aminosalicylates, immunomodulators such as azathioprine or 6-mercaptopurine should be used. Patients may require up to 6 to 12 weeks to see an effect from these agents. However, once
a response is achieved, they are effective at maintaining remission. Use of other immune agents and biologics, as discussed above for Crohn disease, should be individualized as there is limited data on efficacy and tolerance in older adults.
Surgery for UC is indicated in patients who fail medical therapy, have acute fulminant disease, are steroid dependent, or develop a dysplastic lesion or cancer. UC is cured by total proctocolectomy. In older adults, total proctocolectomy with ileostomy remains a popular choice, because restorative procedures like ileo–anal anastomosis are limited by functional morbidity. An alternative surgical procedure is subtotal colectomy, which leaves a rectal stump and provides the patient with an improved chance of fecal continence. Such patients, however, retain colonic mucosa and thus have an ongoing increased risk of colon cancer in the diseased segment. Practitioners caring for older patients who have had subtotal colectomy need to be aware of the need for continued frequent cancer surveillance of the rectal stump.
Colon cancer in IBD The risk of colorectal cancer (CRC) in older patients with long-standing IBD is a significant complication of the disease, with rates of early and missed CRC up to three times those of non-IBD older patients. Colon cancer rates generally are higher in patients with UC than in those with Crohn disease; it appears to be the degree and extent of ongoing inflammation in the colon that confers an increased risk of colon cancer. Patients with Crohn colitis are believed to have an equally high risk of developing colon cancer as their UC peers. The cumulative risk of CRC in UC increases with disease duration, from 2% after 10 years to 18% after 30 years of disease. The older IBD patient with long-standing colonic disease should be considered at high risk for CRC. Surveillance for CRC includes colonoscopy with random mucosal biopsies of the entire colon every 1 to 3 years, starting 8 to 10 years after onset of disease. During surveillance, patients found to have low- or high-grade dysplasia or carcinoma generally are offered proctocolectomy. No specific age recommendations for discontinuing IBD cancer surveillance are available; however, discontinuing routine CRC surveillance is generally recommended when life expectancy falls below 10 years. A study in 2019 found that patients with long-standing IBD and two consecutive negative screening endoscopies had no advanced colorectal neoplasia over an average follow-up of 4 years. Therefore, it may be possible in the future to identify a lower-risk IBD cohort that could be followed less frequently.
Small Bowel Ischemia
Small bowel ischemia is uncommon and is usually due to superior mesenteric artery (SMA) obstruction. Sudden obstruction by a clot causes complete ischemia of the majority of the small bowel and presents as a catastrophic event. Patients usually are unstable, and older patients have a very poor prognosis. Slowly progressive obstruction presents with increasing cramping pain with eating, termed intestinal angina, and in later stages with diarrhea after eating. Patients often lose weight because of food avoidance. The diagnosis is often overlooked because the early symptoms are nonspecific. Angiography is needed to establish the diagnosis; CT angiography is increasingly used because it is safer and less invasive than conventional angiography. Angioplasty can often be performed and is a safer option than vascular surgery in older patients.
Colon Ischemia
Colon ischemia (CI) is the most common intestinal vascular disorder in older adults. CI encompasses a spectrum of injury. The specific conditions resulting from ischemic injury to the colon are classified as reversible or irreversible, and then can be characterized further as reversible ischemic colonopathy, reversible or transient ischemic colitis, chronic ulcerative ischemic colitis, ischemic colonic stricture, colonic gangrene, and fulminant universal ischemic colitis.
Pathophysiology The colon receives its blood supply from branches of the SMA and inferior mesenteric artery. The colon is protected from ischemia by an abundant collateral circulation formed by the marginal arterial complex of Drummond, central anastomotic artery, and arc of Riolan. Occlusion of a major vessel results in opening of collateral pathways in response to arterial hypotension distal to the occlusion. Increased blood flow through collateral pathways maintains adequate perfusion for a variable but brief period of time. If blood flow is diminished for a prolonged period, vasoconstriction develops in the affected bed and may persist after the primary cause of the mesenteric ischemia is reversed.
In most cases, the cause of an episode of CI cannot be established with certainty, and no vascular occlusion can be identified. The causes of CI are vast (Table 84-11) and include thrombosis, embolus, shock, volvulus, hematologic disorders, infections, trauma, surgery, as well as several medications (Table 84-12). The colon is particularly susceptible to
ischemia, perhaps owing to its relatively low blood flow during periods of functional activity and its sensitivity to autonomic stimulation.
TABLE 84-11 ■ CAUSES OF COLONIC ISCHEMIA
TABLE 84-12 ■ MEDICATIONS ASSOCIATED WITH COLON ISCHEMIA
Clinical features Many cases of transient or reversible ischemia still are missed because diagnostic studies are not performed early enough in the course of disease. This is because patients may not seek medical advice for a disease that is self-limited, or the initial symptoms may be confused with other conditions such as IBD.
Approximately 90% of persons with CI are older than age 60 and have widespread evidence of atherosclerosis. Up to 10% of patients may have a potentially obstructing lesion of the colon, including carcinoma, benign stricture, and diverticulitis. Patients with CI usually are not critically ill at the time of diagnosis, and their abdominal pain typically is mild. Mesenteric angiography plays little role in the diagnosis and management of this condition, since colonic blood flow usually has normalized by the time of
presentation. In contrast to small bowel ischemia, the prognosis is often excellent.
Typically, CI presents with the sudden onset of mild crampy left lower quadrant abdominal pain. The pain frequently is accompanied, or followed within 24 hours, by bloody diarrhea or bright red blood per rectum. In most cases, blood loss is minimal; hemodynamically significant bleeding should prompt consideration of other diagnoses, such as diverticular bleeding.
Severe pain is unusual and may indicate irreversible transmural necrosis. The differential diagnosis of CI includes infectious colitis, IBD, pseudomembranous colitis, diverticulitis, and colon carcinoma. In all patients suspected of having colonic ischemia, infection with organisms such as Salmonella, Shigella, Campylobacter, and Escherichia coli O157:H7 should be excluded. In fact, E coli O157:H7 infection induces a colitis that mimics or may even cause CI.
American College of Gastroenterology guidelines from 2015 recommend that an older patient who presents with the sudden onset of abdominal pain and rectal bleeding or bloody diarrhea should undergo abdominal and pelvic CT with intravenous and oral contrast as this will demonstrate colonic wall abnormalities such as bowel wall thickening, edema, thumbprinting, or pneumatosis. If acute mesenteric ischemia (AMI) is suspected due to severe wall abnormalities, laboratory evidence of inflammation and renal impairment, or cardiovascular instability, then multiphasic CT angiography should be performed rapidly. If CT angiography is not diagnostic, then conventional angiography can be attempted. Early surgical consultation should be considered for older patients, as delayed surgery results in much poorer outcomes compared to younger patients. Colonoscopy should be avoided acutely due to increased risk of perforation. Colonoscopy can be performed 24 to 48 hours later if safe to do so to demonstrate mucosal abnormalities and obtain histopathologic mucosal samples. Conventional sigmoidoscopy has some risk of perforation and is of value only if the segment of involved bowel is within reach of the sigmoidoscope; CI involves the sigmoid in 50% to 60% of patients and the rectum in less than 10% of cases. At the outset, purplish blebs representing mucosal and submucosal hemorrhage may be seen. As hemorrhage is absorbed, varying degrees of necrosis, inflammation, ulceration, and mucosal sloughing occur, resembling UC or Crohn disease.
Management Treatment of CI is based on early diagnosis and continued monitoring, with special attention to the radiologic or colonoscopic appearance of the colon for diagnosis and to demonstrate either improvement or progression to chronic ischemic colitis or stricture. Guidelines recommend staging patients into mild, moderate, and severe disease to guide management (Table 84-13).
TABLE 84-13 ■ CLASSIFICATION OF COLON ISCHEMIA (CI) SEVERITY AND MANAGEMENT
Management includes stabilization of the patient, optimization of cardiac function, and bowel rest. Systemic antibiotics are administered in moderate or severe cases. Systemic glucocorticoids are of no proven value and increase the risk of perforation.
Colonic Obstruction
Colonic obstruction results in dilation of the colon, abdominal distention and in some cases, colonic perforation. The majority of colonic obstructions are the result of mechanical obstruction from cancer, volvulus, stricture, impacted stool, surgical adhesion, or bowel intussusception. While patients
of any age can develop obstruction, the condition is more common in older patients due to increased prevalence of underlying conditions that predispose to obstruction. Patients with acute colonic obstruction can develop megacolon, the diagnosis of which is based on a cecal diameter of 12 cm or greater. Cecal distension is critical, because the cecum is the part of the colon that is most susceptible to ischemia and perforation. With obstruction, as fluid and gas accumulate in the colon and intraluminal pressure increases, the radius of the colon increases. Wall tension is the greatest, and hence the risk for perforation most acute, at the area of greatest radius, which is generally in the cecum.
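The wall-tension argument above is an application of Laplace's law: for a roughly cylindrical segment, wall tension T is proportional to intraluminal pressure times radius, so at a shared pressure the widest segment (typically the cecum) bears the greatest tension. A minimal illustrative sketch; the pressure and radii below are hypothetical values chosen only to show the relationship, not physiologic measurements:

```python
# Laplace's-law sketch: T = P * r for a cylinder. At a common intraluminal
# pressure, wall tension scales with radius, which is why the cecum, the
# widest segment, is the usual site of ischemia and perforation.
# All numbers are hypothetical, for illustration only.

def wall_tension(pressure: float, radius_cm: float) -> float:
    """Relative wall tension (arbitrary units) for a cylindrical segment."""
    return pressure * radius_cm

pressure = 30.0  # assumed intraluminal pressure, uniform throughout the colon
radii_cm = {"sigmoid": 1.5, "transverse": 2.5, "cecum": 6.0}  # hypothetical radii

tensions = {name: wall_tension(pressure, r) for name, r in radii_cm.items()}
most_at_risk = max(tensions, key=tensions.get)
print(most_at_risk)  # prints "cecum"
```

With these assumed numbers, the cecum carries four times the wall tension of the sigmoid at the same pressure, mirroring the text's point that risk of perforation is most acute at the area of greatest radius.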
Acute colonic pseudo-obstruction Acute colonic pseudo-obstruction, also known as Ogilvie syndrome, usually presents as intestinal ileus with massive bowel dilation postoperatively or in the setting of a severe intercurrent illness. The mechanism is a relative increase in sympathetic neuronal inhibitory motor input that results in colonic ileus and acute colonic pseudo-obstruction.
Clinical features Acute colonic pseudo-obstruction usually presents in patients with severe underlying illness such as stroke, myocardial infarction, sepsis, or after surgical procedures. It is most common in older people after abdominal surgery or orthopedic procedures of the pelvis, hips, or knees.
The presentation of acute colonic pseudo-obstruction may be subtle and variable, although the most characteristic clinical feature is severe abdominal distention and failure to pass flatus or stool. Some patients report only mild distention and minimal pain. Indeed, a high level of suspicion is necessary to make the diagnosis, because patients often have perioperative bowel cleansing prior to surgery and early passage of stool is not expected.
The hallmark of the disease is colonic dilation on standard abdominal radiography. The entire colon can be affected, although in some cases only the right-sided segments are dilated. The presence of air in the rectum implies that there is no mechanical obstruction and is therefore an important finding to confirm before making a diagnosis of acute colonic pseudo-obstruction.
Management Initial management of acute colonic pseudo-obstruction involves correcting reversible causes of colonic ileus such as electrolyte imbalances, hypoxemia, hypovolemia, and removal of medications that can exacerbate the problem. The vast majority of patients are successfully treated with these relatively simple measures. Bowel rest and intravenous hydration are imperative.
Neostigmine, a cholinesterase inhibitor, given in doses of 1 to 2 mg IV or SC is effective in patients with acute colonic pseudo-obstruction. There are several relative contraindications to the use of neostigmine listed in Table
84-14. Following neostigmine administration, patients should be monitored closely. A second administration of neostigmine can be attempted if there is partial or no response to the first trial.
TABLE 84-14 ■ CONTRAINDICATIONS TO USE OF NEOSTIGMINE IN ACUTE COLONIC PSEUDO-OBSTRUCTION
In selected patients who fail conservative and medical management, colonoscopic decompression of the unprepped bowel can be attempted. However, care is required because colonoscopy insufflates air into an already dilated colon, increasing the risk of perforation. Surgical decompression, sometimes via placement of a cecostomy tube, remains another option for patients who do not respond to medical and endoscopic interventions.
The overall prognosis of patients with acute colonic pseudo-obstruction is poor, with an in-hospital mortality approaching 30%, attributable primarily to the severity of the underlying illness. The most significant complication of acute dilatation is colonic perforation, which occurred in 3% of cases in one retrospective series.
Lower Gastrointestinal Bleeding
Lower GI bleeding is defined as that which arises distal to the ligament of Treitz. Lower GI bleeding occurs less frequently and usually is less severe than upper GI bleeding. The incidence of lower GI bleeding increases significantly with age. The majority of lower GI bleeding in older adults is
the result of diverticula, vascular ectasias, and CI, but there are many other causes (Table 84-15). This section will focus on the approach to the patient with lower GI bleeding and bleeding from vascular ectasias.
TABLE 84-15 ■ CAUSES OF LOWER GI HEMORRHAGE
Acute lower GI bleeding presents with bright red blood per rectum, hematochezia, or melena depending on the location of the bleeding. Bright red blood per rectum usually indicates a distal colonic source or rapidly bleeding upper source. Melena usually indicates a right-sided colonic lesion or a source in the upper GI tract.
The first goal in the management of a patient with lower GI bleeding is resuscitation and hemodynamic stabilization. This may include administration of crystalloid intravenous fluids and blood products. Initial testing usually includes complete blood count, blood chemistry, coagulation profile, and blood type and cross-match, and the results help guide further management. For example, a low mean corpuscular volume often is a sign of
chronic blood loss; a BUN-to-creatinine ratio of greater than 20:1 usually indicates an upper GI source; an elevated INR requires consideration of reversal in the face of hemodynamically significant bleeding. Older patients are more susceptible to complications from hypotension and anemia, and rapid stabilization and monitoring are essential. There should be a low threshold to recommend emergency room evaluation and hospitalization of these patients.
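The BUN-to-creatinine rule of thumb above is simple arithmetic. The sketch below encodes it with the 20:1 threshold from the text; the function names are hypothetical, and the rule is a screening heuristic, not a diagnosis:

```python
def bun_creatinine_ratio(bun_mg_dl: float, creatinine_mg_dl: float) -> float:
    """Ratio of blood urea nitrogen to creatinine (both in mg/dL)."""
    return bun_mg_dl / creatinine_mg_dl

def suggests_upper_gi_source(bun_mg_dl: float, creatinine_mg_dl: float,
                             threshold: float = 20.0) -> bool:
    # Per the rule of thumb above, a ratio greater than 20:1 usually
    # indicates an upper GI source of bleeding.
    return bun_creatinine_ratio(bun_mg_dl, creatinine_mg_dl) > threshold

print(suggests_upper_gi_source(45, 1.0))  # 45:1 -> True
print(suggests_upper_gi_source(15, 1.0))  # 15:1 -> False
```

The elevation reflects absorbed blood protein and prerenal azotemia, so the ratio supports, but never replaces, endoscopic localization.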
Approximately 12% of patients thought to have lower GI bleeding have an upper GI bleeding source. It is important to exclude upper GI bleeding in patients with presumed lower GI bleeding. This can be accomplished with passage of a nasogastric tube and analysis of the gastric aspirate. Bilious fluid without blood in the nasogastric tube aspirate usually confirms the suspicion of lower GI bleeding. If an upper GI source is still in question, urgent upper endoscopy may be performed.
Although urgent upper endoscopy for the diagnosis and treatment of upper GI bleeding is predicated on sound data, urgent colonoscopy in lower GI bleeding has been practiced less consistently. Colonoscopy has the advantage of allowing for the diagnosis and immediate treatment of actively bleeding lesions. A number of reports have shown that “urgent colonoscopy” is safe and yields a specific diagnosis in a high proportion of older patients with lower GI bleeding. On the basis of a high diagnostic yield, low rate of complications, and theoretical therapeutic potential, urgent colonoscopy following a rapid colonic purge has been recommended as the diagnostic procedure of choice in most patients with hemodynamically significant lower GI bleeding.
Other diagnostic tests in patients with active lower GI bleeding include scintigraphy (nuclear tagged red blood cell scans), CT angiography, and conventional angiography. Approximately 45% of patients with lower GI bleeding have positive red blood cell scintigraphy. Tagged red blood cell scans can detect bleeding at a rate greater than 0.1 mL/min and are useful to localize the site of bleeding, but unfortunately offer no option for therapy. If a patient has a positive bleeding scan or CT angiography, traditional angiography with selective embolization can be performed to attempt to stop the bleeding. In order to detect active bleeding, angiography requires a higher rate of bleeding than scintigraphy, 0.5 mL/min compared to 0.1 mL/min. Transcatheter embolization of a lower GI bleeding source is effective in 70% to 90% of patients. If bleeding cannot be stopped with
angiography, surgery to remove the bleeding colonic segment may be necessary. If a specific bleeding site can be localized with the above studies, a limited surgical resection can be performed rather than a subtotal colectomy.
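To put the detection thresholds above in perspective, the minimum bleeding rates can be converted to daily volumes; a quick back-of-the-envelope sketch using the rates from the text:

```python
def ml_per_day(rate_ml_per_min: float) -> float:
    """Convert a bleeding rate in mL/min to total volume lost per day."""
    return rate_ml_per_min * 60 * 24

scintigraphy_min = 0.1  # mL/min, minimum detectable by tagged RBC scan
angiography_min = 0.5   # mL/min, minimum detectable by angiography

print(ml_per_day(scintigraphy_min))  # 144.0 mL/day
print(ml_per_day(angiography_min))   # 720.0 mL/day
```

In other words, scintigraphy can pick up bleeding on the order of 144 mL/day, while angiography needs roughly 720 mL/day of active loss, which is why a positive scintigram or CT angiogram is often used to select patients for therapeutic angiography.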
Vascular ectasias Vascular ectasias, which arise from an age-related degeneration of previously normal blood vessels, typically occur in the cecum and proximal ascending colon. Along with diverticular bleeding, they are responsible for the majority of significant lower GI bleeding episodes in older adults. Ectasias are found in up to 25% of persons older than 60 years who do not have symptoms; they typically are multiple and less than 5 mm in diameter. Vascular ectasias probably arise as a result of repeated episodes of incomplete, low-grade obstruction of submucosal veins caused by increased tension in the colonic wall. The ultimate result is tortuosity and dilation of the venules and, eventually, of the arteriolar–capillary units that feed them, resulting in small arteriovenous communications (Figure 84-5).
FIGURE 84-5. Proposed concept of the development of cecal vascular ectasias. A. Normal state of vein perforating muscular layers. B. With muscle contraction or increased intraluminal pressure, the vein is partially obstructed. C. After repeated episodes over many years, the submucosal vein becomes dilated and tortuous. D. Later, the veins and venules draining into the abnormal submucosal vein become similarly dilated and tortuous. E. Ultimately, the capillary ring becomes dilated, the precapillary sphincter becomes incompetent, and a small arteriovenous communication is present through the ectasia.
Lower GI bleeding caused by a vascular ectasia may be clinically indistinguishable from diverticular bleeding and is characterized by painless hematochezia. Bleeding from vascular ectasias may be hemodynamically significant, and a variety of treatment options exists including electrocoagulation, injection therapy, heater probe application, or argon plasma coagulation.
ACKNOWLEDGMENT
Many thanks to Richard Saad, MD, for his review and very helpful comments and to David A. Greenwald, MD, for his contributions to the chapter Common Large Intestinal Disorders in earlier editions of this book. Material from that chapter, including tables and figures, has been incorporated into this one.
FURTHER READING
Asempa TE, Nicolau DP. Clostridium difficile infection in the elderly: an update on management. Clin Interv Aging. 2017;12:1799–1809.
Brandt LJ, Feuerstadt P, Longstreth GF, Boley SJ. American College of Gastroenterology clinical guideline: epidemiology, risk factors, patterns of presentation, diagnosis, and management of colon ischemia (CI). Am J Gastroenterol. 2015;110(1):18–44.
Broens PM, Penninckx FM. Relation between anal electrosensitivity and rectal filling sensation and the influence of age. Dis Colon Rectum. 2005;48:127–133.
Comparato G, Pilotto A, Franze A, et al. Diverticular disease in the elderly. Dig Dis. 2007;25:151–159.
Drekonja D, Reich J, Gezahegn S, et al. Fecal microbiota transplantation for Clostridium difficile infection: a systematic review. Ann Intern Med. 2015;162:630–638.
Gisbert JP, Chaparro M. Systematic review with meta-analysis: inflammatory bowel disease in the elderly. Aliment Pharmacol Ther. 2014;39:459–477.
Greenwald DA, Brandt LJ, Reinus JF. Ischemic bowel disease in the elderly. Gastroenterol Clin North Am. 2001;30(2):445–473.
Hall KE, Proctor DD, Fisher L, Rose S. AGA future trends committee report: effects of aging of the population on gastroenterology practice, education and research. Gastroenterology. 2005;129:1305–1338.
Jafri SM, Monkemuller K, Lukens FJ. Endoscopy in the elderly: a review of the efficacy and safety of colonoscopy, esophagogastroduodenoscopy, and endoscopic retrograde cholangiopancreatography. J Clin Gastroenterol. 2010;44:161–166.
Jain A, Vargas HD. Advances and challenges in the management of acute colonic pseudo-obstruction (Ogilvie syndrome). Clin Colon Rectal Surg. 2012;25:37–45.
McDonald LC, Gerding DN, Johnson S, et al. Clinical practice guidelines for Clostridium difficile infection in adults and children: 2017 update by the Infectious Diseases Society of America (IDSA) and Society for Healthcare Epidemiology of America (SHEA). Clin Infect Dis. 2018;66(7):e1–e48.
Nguyen GC, Smalley WE, Vege SS, Carrasco-Labra A; Clinical Guidelines Committee. American Gastroenterological Association Institute guideline on the medical management of microscopic colitis. Gastroenterology. 2016;150(1):242–246.
Pardi DS. Microscopic colitis. Clin Geriatr Med. 2014;30:55–65.
Peery AF, Shaukat A, Strate LL. AGA clinical practice update on medical management of colonic diverticulitis. Gastroenterology. 2021;160(3):906–911.e1.
Rezapour M, Stollman N. Diverticular disease in the elderly. Curr Gastroenterol Rep. 2019;21(9):46.
Schembri J, Bonello J, Christodoulou DK, Katsanos KH, Ellul P. Segmental colitis associated with diverticulosis: is it the coexistence of colonic diverticulosis and inflammatory bowel disease? Ann Gastroenterol. 2017;30(3):257–261.
Shaukat A, Kahi CJ, Burke CA, Rabeneck L, Sauer BG, Rex DK. ACG clinical guidelines: colorectal cancer screening 2021. Am J Gastroenterol. 2021;116(3):458–479.
Stepaniuk P, Bernstein CN, Targownik LE, et al. Characterization of inflammatory bowel disease in elderly patients: a review of epidemiology, current practices and outcomes of current management strategies. Can J Gastroenterol Hepatol. 2015;29:327–333.
Ten Hove JR, Shah SC, Shaffer SR, et al. Consecutive negative findings on colonoscopy during surveillance predict a low risk of advanced neoplasia in patients with inflammatory bowel disease with long-standing colitis: results of a 15-year multicentre, multinational cohort study. Gut. 2019;68(4):615–622.
Tran V, Limketkai BN, Sauk JS. IBD in the elderly: management challenges and therapeutic considerations. Curr Gastroenterol Rep. 2019;21(11):60.
Travis AC, Pievsky D, Saltzman JR. Endoscopy in the elderly. Am J Gastroenterol. 2012;107:1495–1501.
Wang YR, Cangemi JR, Loftus EV Jr, Picco MF. Rate of early/missed colorectal cancers after colonoscopy in older patients with or without inflammatory bowel disease in the United States. Am J Gastroenterol. 2013;108(3):444–449.
Chapter
85
Upper Gastrointestinal Disorders
Alberto Pilotto, Marilisa Franceschi
INTRODUCTION
Upper gastrointestinal disorders (UGIDs) are highly prevalent in the aging population and may greatly influence the nutrition, general well-being, and quality of life of older people. This chapter discusses the pathophysiology, clinical features, diagnostic approaches, and treatments of UGIDs, focusing on gastroesophageal reflux disease (GERD), peptic ulcer disease (PUD), including considerations on Helicobacter pylori infection and its eradication, and upper gastrointestinal bleeding (UGIB). Particular attention is given to the pharmacology of UGIDs, both as a causal mechanism, as in the use of nonsteroidal anti-inflammatory drugs (NSAIDs) alone or in combination with anticoagulants/antiplatelet agents, and in the treatment of acid-related disorders. Special attention is given to situations in which deprescription of widely used antisecretory drugs, mainly proton pump inhibitors, is warranted. UGIDs may present insidiously in older adults, who often have nonspecific symptoms, independently of whether esophageal and/or gastroduodenal lesions are demonstrated on instrumental diagnostic evaluation. Since UGIDs often require an interdisciplinary approach involving endoscopists, radiologists, gastroenterologists, and surgeons, scenarios in which referral to a specialist is recommended are also described.
GASTROESOPHAGEAL REFLUX DISEASE
Definition
GERD is defined by symptoms and/or histopathologic alterations (esophagitis) caused by reflux of gastric contents into the esophagus.
Manifestations of GERD range from mild episodes of heartburn and acid regurgitation without esophagitis, commonly defined as nonerosive reflux disease (NERD), to chronic mucosal inflammation with erosive esophagitis and ulceration, complicated in severe cases by stricture and bleeding.
Epidemiology
GERD is a frequent condition with worldwide distribution. The prevalence of GERD is estimated to be 12% in Australia, 7.8% in East Asia, 8.8% to 26% in Europe, 8.7% to 33% in the Middle East, 18% to 28% in North America, and 23% in South America. A large US nationwide cohort study reported that the prevalence of GERD is increasing overall, but apparently not in subjects older than 70 years, in whom it remained stable at around 15% from 2006 to 2016. GERD prevalence remains higher among subjects older than 70 years; however, because prevalence in those aged 30 to 39 is increasing, the proportion of GERD patients who are older than 70 years is decreasing.
Learning Objectives
Identify the presentation of gastroesophageal reflux disease (GERD) in older adults and select the most appropriate therapeutic strategies.
Employ management strategies that reduce the risk of peptic ulcer disease (PUD) in older adults, including limiting the use of nonsteroidal anti-inflammatory drugs and eradicating H pylori when indicated.
Ascertain the mechanisms that ensure proper use of proton pump inhibitors (PPIs) in preventing PUD and UGIB, with appropriate discontinuation of PPI therapy when not needed, making use of good clinical practice guidelines including a multidimensional approach and noninvasive gastric function tests.
Key Clinical Points
1. GERD is common in older adults, with an incidence of 5 per 1000 person-years in the United Kingdom and United States. Symptoms may not correlate with the severity of esophageal findings on endoscopy, so a trial of proton pump inhibitors (PPIs) should rarely be done without endoscopy for suspected GERD in older adults. PPIs are more effective than H2-blockers, but prokinetic agents have not been shown to be superior to placebo.
2. The prevalence of PUD increases with age, likely due to H pylori infection, use of mucosa-damaging drugs, and an imbalance between mucosal erosive and protective factors. Treatment of H pylori reduces PUD relapse, but many older adults go untested and untreated. H pylori eradication after treatment should be documented with a breath test or stool antigen.
3. Upper GI bleeding (UGIB) occurs in older adults due to the same pathologies as seen in young adults, but may present with exacerbation of underlying disease (eg, cardiac disease) or non-GI symptoms such as syncope.
4. PPIs interact with many drugs prescribed for older adults, and long-term PPI use has been associated with multiple adverse outcomes, especially in frail older adults. Thus, long-term PPI use should be avoided unless there are clear indications, and deprescription should be enacted whenever possible.
While it is still unclear whether the incidence and prevalence of GERD symptoms increase with advancing age, the frequency of esophagitis is higher in older than in younger adults. Indeed, older age was found to be a significant risk factor for the development of severe forms of GERD in both epidemiologic and clinical studies from the United States, Japan, and Europe. Consistently, GERD was the sixth most common disorder in a retrospective cross-sectional study of almost 20,000 long-term care residents of nursing homes older than 65 years, with an overall prevalence of 23%. In summary, GERD is a highly prevalent disorder in older adults and is associated with more severe and advanced disease than in young adults.
Pathophysiology
Pathophysiologic changes in esophageal function that occur with aging, the so-called presbyesophagus, are summarized in Table 85-1 and may be responsible, at least in part, for the high prevalence of GERD in old age.
Older patients have a high prevalence of other risk factors that predispose the aging esophagus to lesions (also see Table 85-1): (1) difficulty in maintaining an upright position after meals because of modifications of thoracic anatomy linked to dorsal kyphosis and collapse of the dorsal vertebrae; and (2) hiatus hernia, which is associated both with repeated episodes of acid reflux and with more severe disease such as Barrett esophagus. Older adults often use drugs that may directly damage the esophageal mucosa or indirectly reduce lower esophageal sphincter (LES) pressure (Table 85-2). The delayed esophageal transit of many drugs in older people creates a potentially dangerous situation when it coexists with acid reflux, as reported for alendronate and NSAIDs.
TABLE 85-1 ■ PATHOPHYSIOLOGIC CHANGES AFFECTING THE ESOPHAGUS WITH AGING
TABLE 85-2 ■ DRUGS THAT MAY INCREASE THE RISK OF SEVERE GERD
Presentation
Particular attention has been given to the clinical presentation of GERD in older adults since important differences have been reported when compared to younger individuals. Older patients report fewer typical symptoms of heartburn, acid regurgitation, and epigastric pain (Figure 85-1). Atypical symptoms also are relatively rare in older adults (Table 85-3). In contrast, the prevalence of nonspecific symptoms of vomiting, anorexia, weight loss, and anemia increases with age (Figure 85-2). Reflux esophagitis in older adults may thus be missed, and a substantial number of patients may suffer subclinical relapses of the disease. The cause of such a different clinical expression of GERD in older adults is not clear, but a diminished sensitivity to visceral pain has been documented in older adults. Moreover, 24-hour esophageal pH monitoring and endoscopy examinations demonstrate an age-related reduction in acid chemosensitivity and a reduced symptom severity despite a tendency toward increased severity of esophageal mucosal injury.
TABLE 85-3 ■ TYPICAL AND ATYPICAL SYMPTOMS OF GASTROESOPHAGEAL REFLUX DISEASE
FIGURE 85-1. Prevalence of typical symptoms in 840 subjects with reflux esophagitis divided according to age. (Adapted with permission from Pilotto A, Franceschi M, Leandro G, et al.
Clinical features of reflux esophagitis in older people: a study of 840 consecutive patients. J Am Geriatr Soc. 2006;54[10]:1537–1542.)
FIGURE 85-2. Prevalence of nonspecific symptoms in 840 subjects with reflux esophagitis divided according to age. (Adapted with permission from Pilotto A, Franceschi M, Leandro G, et al. Clinical features of reflux esophagitis in older people: a study of 840 consecutive patients. J Am Geriatr Soc. 2006;54[10]:1537–1542.)
Evaluation
Since the presentation of GERD is often nonspecific in older adults, endoscopy should be undertaken early as the initial diagnostic test in any older patient suspected of having GERD. Early endoscopy is very useful in diagnosing the presence and the grade of severity of esophagitis (Figure 85-3) and/or hiatus hernia, which are important prognostic factors to be considered in long-term treatment, especially if the hernia is greater than 3 cm. Endoscopy also identifies GERD complications, especially esophageal strictures and Barrett esophagus, and concomitant gastroduodenal diseases including gastric or duodenal ulcers and/or H pylori infection. Barium radiography of the esophagus is a useful test to establish the presence of a hiatus hernia and is indicated as part of the evaluation of the patient with
suspected motility abnormalities or peptic stricture. A barium study may also identify rings, webs, or other obstructive lesions. The barium swallow test is also a key test in studying older patients with dysphagia, and it should be performed in conjunction with endoscopy in all older patients with this symptomatology. Moreover, coupled with videofluoroscopic swallowing studies, barium swallow (esophagram) allows the identification of Zenker diverticulum and cricopharyngeal dysfunction. Barium studies are widely available and usually well tolerated by older people. Esophageal 24-hour
pH testing is helpful before antireflux surgery and in those patients not responsive to medical treatment. In the patient with persistent symptoms and a negative endoscopy, an abnormal esophageal pH test may suggest the need
for more aggressive drug therapy, whereas a normal test may indicate the presence of a functional disorder. In the patient with persistent esophagitis, a normal test could differentiate pathophysiologic mechanisms, that is, a drug-induced esophagitis from acid reflux disease or biliary reflux. Esophageal manometry is useful in identifying abnormalities of LES pressure or esophageal motility. In older patients, its major use in GERD is reserved for the localization of LES before pH testing and for obtaining preoperative information on esophageal peristalsis.
FIGURE 85-3. Endoscopic classification of esophagitis (according to the Los Angeles Grading System). (Data from Lundell LR, Dent J, Bennett JR, et al. Endoscopic assessment of oesophagitis: clinical and functional correlates and further validation of the Los Angeles classification. Gut. 1999;45[2]:172–180.)
A therapeutic trial with PPI has been suggested as a useful diagnostic test in patients with GERD. Meta-analyses of clinical studies demonstrated that an empiric PPI trial has 78% sensitivity and 54% specificity in confirming the diagnosis of GERD when compared with endoscopy or 24-hour esophageal pH monitoring. In patients with noncardiac chest pain, sensitivity and specificity are reported to be 80% and 74%, respectively. Because typical symptoms (heartburn, acid regurgitation) and extraesophageal symptoms (pulmonary, otorhinolaryngeal, noncardiac chest pain) of GERD are less frequent in older than in younger patients, the history is less reliable, and severe esophagitis is highly prevalent despite mild symptoms. A trial of PPIs should therefore be used with great caution in older patients, preferably only after endoscopy as the first diagnostic test.
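The clinical weight of these operating characteristics depends on pretest probability. As a rough illustration (a hedged sketch; the 78%/54% figures come from the meta-analysis above, while the pretest probabilities are assumed for illustration), Bayes' rule gives the post-test probabilities:

```python
def post_test_probabilities(sens, spec, pretest):
    """Return (PPV, NPV) for a test with given sensitivity/specificity
    applied at a given pretest probability (Bayes' rule)."""
    p_pos = sens * pretest + (1 - spec) * (1 - pretest)   # P(test positive)
    ppv = sens * pretest / p_pos                          # P(disease | positive test)
    npv = spec * (1 - pretest) / (1 - p_pos)              # P(no disease | negative test)
    return ppv, npv

# Empiric PPI trial vs endoscopy/24-h pH monitoring: sens 0.78, spec 0.54
for pretest in (0.3, 0.5, 0.7):   # assumed pretest probabilities of GERD
    ppv, npv = post_test_probabilities(0.78, 0.54, pretest)
    print(f"pretest {pretest:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")
```

With the low specificity reported, a positive trial raises the probability of GERD only modestly, consistent with the caution advised above for older patients.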
Management
The objectives of GERD treatment are (1) relief of symptoms, (2) esophagitis healing, (3) prevention of relapse, and (4) prevention of complications.
Nonpharmacological Although there is clinical and physiologic evidence that smoking or intake of alcohol, chocolate, peppermint, coffee, fatty foods, or citrus may adversely affect symptoms or esophageal pH, there is little evidence that eliminating these agents will improve GERD variables. Elevation of the head of the bed and weight loss, however, have been associated with improvement in GERD variables in case-control studies. Medications that decrease LES pressure and promote gastroesophageal reflux, as well as drugs that may cause direct esophageal injury (see Table 85-2), should be avoided when possible or used with caution in older patients with GERD. In any case, these drugs should be taken in an upright position and with a full glass of water.
Drugs
Short-Term Therapy Antacids and alginic acid provide symptomatic relief in mild, nonerosive esophagitis. Possible side effects of antacids include salt overload, constipation, hypercalcemia, and interference with the absorption of other drugs, particularly antibiotics such as tetracycline, azithromycin, and quinolones. Caution is warranted in patients with chronic kidney disease or liver failure, not only because of reduced clearance but also because of modified electrolyte metabolism.
Prokinetic drugs, including metoclopramide, clebopride, domperidone, and levosulpiride, either alone or in combination with antisecretory drugs, are only moderately effective in GERD. Unfortunately, no controlled clinical trials have evaluated the role of these drugs specifically in the treatment of GERD in older adults. However, older patients treated with prokinetic drugs are more likely than younger subjects to have adverse events such as tardive dyskinesia, mental confusion, drowsiness, and age-related renal dysfunction, which may require dose reduction and close clinical monitoring. Thus, antireflux therapy is focused largely on suppressing gastric acid secretion with H2-blockers and PPIs.
A series of meta-analyses of 34 trials including 1314 participants demonstrated that PPIs were more effective than H2-blockers and prokinetics in relieving heartburn both in patients with GERD who were treated
empirically and in patients with endoscopy-negative reflux disease (ENRD) (Figure 85-4). Moreover, a meta-analysis of 134 trials involving 35,978 patients with esophagitis demonstrated benefit for standard dose PPI therapy and H2-blockers but not prokinetics compared to placebo in healing of
esophagitis. In addition, 26 trials evaluating 4032 participants reported that there was benefit for PPI therapy compared to H2-blockers or H2-blockers plus prokinetics in healing of esophagitis. Thus, PPI is the most effective
therapy for heartburn relief and healing of esophagitis, and H2-blocker
therapy is also effective. However, the effectiveness of prokinetics over placebo is marginal (Figure 85-5).
FIGURE 85-4. Efficacy of short-term use of proton pump inhibitors (PPI), H2-receptor antagonists (H2RA), and prokinetics in adults with GERD treated empirically (left) and in those with endoscopy-negative reflux disease (ENRD) (right). (Data from Sigterman KE, van Pinxteren B, Bonis PA, et al. Short-term treatment with proton pump inhibitors, H2-receptor antagonists and prokinetics for gastro-oesophageal reflux disease-like symptoms and endoscopy negative reflux disease. Cochrane Database Syst Rev. 2013;31[5]:CD002095.)
FIGURE 85-5. Effectiveness of proton pump inhibitors (PPIs), H2-receptor antagonists (H2RAs), prokinetic therapy, sucralfate, and placebo in healing esophagitis. (Data from Khan M, Santana J, Donnellan C, et al. Medical treatments in the short term management of reflux oesophagitis. Cochrane Database Syst Rev. 2007;[2]:CD003244.)
Currently, six PPIs are available: omeprazole, lansoprazole, dexlansoprazole (lansoprazole’s R-enantiomer), rabeprazole, pantoprazole, and esomeprazole. Some age-associated differences in pharmacokinetics and pharmacodynamics of PPIs have been reported. However, it is unknown if these differences are associated with different clinical effects, particularly in older patients. Indeed, a series of meta-analyses evaluating acute therapy of esophagitis reported that the PPIs were superior to ranitidine and placebo in healing erosive esophagitis, without differences in efficacy among omeprazole 20 mg, lansoprazole 30 mg, pantoprazole 40 mg, and rabeprazole 20 mg daily. A systematic review and meta-analysis of randomized controlled trials comparing effectiveness and acceptability of the FDA-licensed PPIs for esophagitis reported that esomeprazole 40 mg provided higher healing rates than omeprazole 20 mg, lansoprazole 30 mg, pantoprazole 40 mg, and rabeprazole 20 mg at 4 weeks and omeprazole 20 mg, lansoprazole 30 mg, and rabeprazole 20 mg at 8 weeks (Figure 85-6). In terms of acceptability, only dexlansoprazole 60 mg had significantly more
all-cause discontinuation than omeprazole 20 mg, pantoprazole 40 mg, and lansoprazole 30 mg (Figure 85-7).
FIGURE 85-7. Acceptability (risk of discontinuation) of the FDA-licensed proton pump inhibitors for erosive esophagitis. (Data from Li MJ, Li Q, Sun M, et al. Comparative effectiveness and acceptability of the FDA-licensed proton pump inhibitors for erosive esophagitis: a PRISMA-compliant network meta-analysis. Medicine (Baltimore).
2017;96[39]:e8120.)
FIGURE 85-6. Healing rates of erosive esophagitis at 4 and 8 weeks. (Data from Li MJ, Li Q, Sun M, et al. Comparative effectiveness and acceptability of the FDA-licensed proton pump inhibitors for erosive esophagitis: a PRISMA-compliant network meta-analysis. Medicine
(Baltimore). 2017;96[39]:e8120.)
Long-Term Maintenance Therapy GERD is a chronic disease with a 70% to 90% annual relapse rate after the interruption of an effective antisecretory regimen. Risk factors for relapse are shown in Table 85-4. Maintenance therapy with antisecretory drugs reduces relapse of GERD in older patients. Also PPIs have higher efficacy than H2-blockers and prokinetics in
maintaining mucosal healing after an episode of esophagitis. There are two main approaches to maintenance drug therapy for GERD: step-up and step-down. In the step-up approach, therapy is initiated with weak inhibition of gastric acid (eg, an H2-blocker or half dosage of a PPI) and progresses to a
higher degree of acid inhibition (standard and then escalating doses of PPI), until adequate symptom control is obtained. The step-down approach involves starting with the most effective regimen (full dosage of a PPI) and switching to lower doses of PPI for maintenance therapy once symptoms are under control. This latter approach is perhaps more rational, based on evidence showing superior efficacy of PPIs over H2-blockers across all
grades of severity of GERD. Withdrawal of maintenance therapy with PPI after 6 months reduces the remission rate of esophagitis at 1 year from 95% to 33% in patients older than 65 years (Figure 85-8). Presently, no comparative studies have been carried out to evaluate which strategy (step-down vs step-up) is more cost-effective in older subjects. As chronic PPI use without reassessment contributes to polypharmacy and puts subjects at risk of drug interactions and adverse events (see below), deprescribing should be a goal of treatment. A Cochrane review, however, found that although deprescription strategies, including on-demand PPI intake versus continuous indefinite use, effectively reduce costs and medication intake, they are associated with patient dissatisfaction due to symptomatic disease relapse. Unfortunately, studies carried out specifically in older people are lacking. Figure 85-9 summarizes the therapeutic schemes for short-term and long-term PPI treatment in older people with GERD and NERD.
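The step-up and step-down approaches amount to opposite traversals of the same ladder of acid suppression; a minimal sketch (the tier labels and function names are illustrative, not a clinical protocol):

```python
# Illustrative ladder of acid suppression, weakest to strongest
TIERS = ["H2-blocker", "half-dose PPI", "standard-dose PPI", "high-dose PPI"]

def step_up(tier, symptoms_controlled):
    """Step-up: begin low; escalate one tier while symptoms persist."""
    if symptoms_controlled:
        return tier                      # stay at the current tier
    return min(tier + 1, len(TIERS) - 1)

def step_down(tier, symptoms_controlled):
    """Step-down: begin at full-dose PPI; de-escalate once symptoms are controlled."""
    if not symptoms_controlled:
        return tier                      # keep the effective regimen
    return max(tier - 1, 0)
```

Starting at `TIERS[-1]` and calling `step_down` after each clinical review mirrors the maintenance strategy described above, titrating down to the lowest dose that keeps symptoms controlled.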
TABLE 85-4 ■ RISK FACTORS FOR RELAPSE OF ESOPHAGITIS
FIGURE 85-8. Remission at 12 months in older patients with esophagitis who continued versus stopped proton pump inhibitor (PPI) maintenance treatment after 6 months. (Data from Pilotto A, Leandro G, Franceschi M, et al. Short- and long-term therapy for reflux oesophagitis in the elderly: a multi-centre, placebo-controlled study with pantoprazole. Aliment Pharmacol Ther.
2003;17[11]:1399–1406.)
FIGURE 85-9. Therapeutic schemes for short-term and long-term proton pump inhibitor (PPI) treatments in older people with gastroesophageal reflux disease (GERD) and nonerosive reflux
disease (NERD).
Safety of Long-Term Antisecretory Treatment PPIs represent a very effective class of drugs widely prescribed in all age groups, including older subjects, and often for prolonged periods. The potential adverse effects of long-term PPI use are widespread, as illustrated in Figure 85-10. Thus, safety evaluation of long-term treatment with PPIs is emerging as a major topic in clinical practice.
FIGURE 85-10. Possible mechanisms of adverse events associated with long-term use of proton pump inhibitors (PPIs).
PPIs and esophageal and gastric histologic changes. There is presently no
evidence supporting the hypothesis that long-term PPI therapy may increase the risk of neoplastic degeneration in patients with Barrett esophagus.
Conversely, based on meta-analysis of observational studies, the long-term use of PPIs was associated with a decreased risk of esophageal adenocarcinoma and/or high-grade dysplasia in patients with Barrett esophagus. As regards PPI long-term use and the risk of gastric cancer, conflicting data have been reported. A positive association between fundic gland polyps (FGPs) and acid suppression has been reported. Although this
constitutes a frequent endoscopic finding, it is not associated with risk of neoplastic disease and lacks clinical significance.
PPIs and hypergastrinemia. The physiologic role of gastrin is to stimulate gastric enterochromaffin-like (ECL) cells to secrete histamine, which in turn stimulates the secretion of hydrochloric acid from gastric parietal cells.
Since PPIs inhibit the final step of gastric acid secretion by irreversible binding to the parietal cell H+/K+-ATPase (the proton pump), secondary hypergastrinemia may occur in subjects treated long-term with PPIs. Indeed, an increase in the average serum gastrin level as well as a diffuse (simple)
or linear/micronodular (focal) ECL-cell hyperplasia has been observed in patients continuously using PPIs. The clinical relevance of these findings is currently uncertain.
PPIs and iron and vitamin B12 deficiencies. While gastric acid suppression may decrease iron absorption, it remains uncertain whether iron-deficiency anemia may result from chronic PPI therapy.
Long-term use of gastric acid suppressant drugs, particularly PPIs, is associated with the development of vitamin B12 deficiency, especially in older subjects. Thus, it is appropriate to assess vitamin B12 status in long-
term users of PPIs, especially in older subjects who may have poor nutrition (and therefore low vitamin B12 intake) and/or in patients requiring life-long PPI treatments, such as those with Zollinger-Ellison syndrome.
PPIs and magnesium. Long-term PPI use can be associated with subclinical magnesium deficiency, as has been observed in hospitalized adult patients. Thus, the FDA has suggested monitoring serum magnesium levels in patients on PPI therapy.
PPIs, bone metabolism, osteoporosis, and risk of fractures. Antisecretory drugs
may interfere with calcium absorption through induction of hypochlorhydria; PPIs, moreover, may reduce bone resorption through inhibition of the osteoclastic vacuolar proton pump. Indeed, long-term PPI therapy has been associated with an increased risk of hip, wrist, or spine fractures. The risk of fracture is higher in patients who receive multiple daily doses of PPIs and when therapy is prolonged for 1 year or longer. Additional risk factors for osteoporosis, such as old age and female gender, may further increase the risk of fractures in long-term users of acid-suppressing drugs.
PPIs and enteric infections. Inhibition of acid secretion may allow overgrowth of the gastrointestinal bacterial flora, with the potential to increase the incidence of systemic infections, particularly in immunocompromised older subjects. Indeed, the use of antisecretory drugs has been associated with an increased risk of infections of the lower gastrointestinal tract, mainly due to Salmonella species and Clostridium difficile. In a large population-based study, subjects treated with acid-lowering drugs had a threefold higher risk of developing bacterial diarrhea than those taking antihypertensive or antiasthmatic medications. In older patients this susceptibility is increased, particularly for Salmonella and Campylobacter infections.
Some data suggest that widespread use of PPIs, especially in the hospital setting, may contribute to the current in-hospital epidemic of C difficile infections. Indeed, a meta-analysis of 56 studies involving 356,683 patients provided strong evidence that PPI use is associated with an approximately twofold increased risk of C difficile infection, with no difference in subgroup analyses between patients younger than 65 years and those 65 years or older.
PPIs and community-acquired pneumonia. A series of observational studies have suggested that the use of PPIs could increase the risk of hospitalization for community-acquired pneumonia (CAP), particularly in older subjects. The potential presence of confounding factors, however, seems to limit the conclusions of these studies. Indeed, meta-analyses have demonstrated that PPI and H2-blocker long-term use is not associated with an increased risk of
hospitalization for CAP.
PPIs and COVID-19. The recent pandemic due to SARS-CoV-2 has prompted examination of PPI use and the risk of adverse outcomes of COVID-19. Two meta-analyses found that PPI use is associated with a nonsignificantly higher risk of becoming infected with SARS-CoV-2 (5 studies; odds ratio [OR], 1.33) and with a significantly higher risk of severe outcomes of COVID-19, including intensive care unit admission or death (9 studies; OR, 1.67). In another meta-analysis, PPI use was associated with significantly higher risks of severe outcomes of COVID-19 (3 studies; OR, 1.46) and of developing secondary infections (2 studies; OR, 2.91). Although a causal relation between PPI use and susceptibility to COVID-19 or worse COVID-19 outcomes has not been proven, a plausible biological mechanism might involve facilitated gut viral entry due to reduced gastric acid production.
PPIs and the microbiome/microbiota. As it becomes more evident that the gut microbiota differs markedly at different life stages and in health versus disease, the association between medications and the microbiome is being explored. It is becoming clear that greater diversity of the gut microbiota is associated with good health, and that the predominance of certain bacterial species over others is a hallmark of certain conditions.
Among older hospitalized patients compared to healthy subjects, polypharmacy was significantly associated with gut microbiota dysbiosis, that is, reduction in species richness and significant variations in the average relative abundance of a large number of taxa, including Helicobacter.
Importantly, dysbiosis also exhibited a significant association with mortality at follow-up. Healthy-active older subjects without polypharmacy did not exhibit dysbiosis. Of note, among specific drug classes, PPIs, antipsychotics, and antidepressants had the strongest associations with gut microbiota composition.
PPIs, frailty, and mortality. Chronic use of PPIs has been reported to be associated with a higher risk of functional decline in frail older subjects. Moreover, in older patients discharged from acute care hospitals, the use of high-dose PPIs was associated with increased 1-year mortality. Although further studies are needed, these data suggest caution when continuing PPIs following an episode of hospitalization in older adults.
Bioavailability and metabolism of other agents. Due to the profound and long-lasting elevation of intragastric pH, it is not surprising that PPIs interfere with the absorption of concurrent medications, as drug solubility may be substantially reduced at neutral pH compared with acidic conditions. PPIs reduce the bioavailability of many drugs (eg, ketoconazole, atazanavir) by 50% or more compared with control values. Moreover, omeprazole has been associated with 30% and 10% reductions in the systemic clearance of diazepam and phenytoin, respectively. Since PPIs are mainly metabolized by cytochrome P450 (CYP) enzymes, particularly the subfamily CYP2C19, clinically relevant drug interactions may occur with concomitant administration of CYP2C19-metabolized drugs.
Due to omeprazole’s and esomeprazole’s competitive interference with the conversion of clopidogrel to its active metabolite through the CYP2C19 pathway, their concomitant use is discouraged. Although evidence of safety regarding concomitant use of clopidogrel and other PPIs including
pantoprazole, rabeprazole, and lansoprazole is still lacking, at present these drugs are not contraindicated in patients on antiplatelet therapy, but caution is warranted, and an individualized approach which assesses the real need for PPI therapy is recommended. Drug interactions have also been described between PPIs and warfarin, phenytoin, ledipasvir, sofosbuvir, methotrexate, digoxin, nelfinavir, and rilpivirine, amongst others.
Overutilization of acid inhibitors. Acid inhibitors are among the most commonly used pharmaceuticals. United States trends indicate that the proportion of PPI users increased from 5.70% in 2002–2003 to 6.73% in 2016–2017, especially among adults aged 65 and older and those who were obese. A previous study of 946 PPI users showed that 35% were prescribed a PPI for a documented upper GI disorder, 10% for empirical symptomatic treatment, and 18% for gastric protection while on NSAIDs or antiplatelet drugs, while up to 36% of subjects had no documented appropriate indication for PPI therapy. Other studies have reported that PPIs are frequently prescribed during hospitalization, especially in older adults, and that as many as 50% of such prescriptions may be continued after discharge without a clear indication. All these data suggest that older patients on long-term PPI therapy should be periodically reevaluated for the indications for continued therapy.
The role of surgery The place of surgery in the treatment of GERD is controversial. Laparoscopic fundoplication has greatly reduced the morbidity and mortality of antireflux surgery, including in older adults.
Indications for surgery are evolving; at present, evidence suggests that surgery may be indicated in older patients who (1) are medical treatment failures; (2) have severe complications, such as strictures not treatable by endoscopy; (3) have severe dysphagia, aspiration, or atypical symptoms associated with a large hiatal hernia; and/or (4) have preneoplastic lesions, that is, Barrett esophagus with high-grade dysplasia. Randomized clinical studies are needed to compare the outcome of antireflux surgery with that of medical therapy in older adults. Furthermore, surgery for GERD should be centralized to units specialized in these techniques to reduce surgical complications and improve successful clinical outcomes.
PEPTIC ULCER DISEASE
Definition
Peptic ulcer is a break in the mucosa lining the stomach or the duodenum. According to anatomical location, peptic ulcers are divided into gastric ulcers, that is, peptic ulcers of the gastric fundus, body, or antrum; prepyloric and pyloric ulcers, that is, those located within 3 cm of the pyloric ring and in the pyloric ring, respectively; and duodenal ulcers, that is, those located in the bulb or in the second portion of the duodenum (Figure 85-11).
FIGURE 85-11. Anatomical location of peptic ulcers.
Epidemiology
The prevalence of PUD worldwide ranges from 0.1% to 4.7%, with an annual incidence ranging from 0.19% to 0.3%. Duodenal ulcers predominate in Western populations, whereas gastric ulcers are more frequent in Asia. Overall, the prevalence and incidence of PUD are declining, but the incidence of PUD and its complications increases with advancing age (Figure 85-12). Indeed, the rates of hospitalization for complicated peptic ulcer (Figure 85-13) and mortality rates for upper GI complications (Figure 85-14) remain very high in older patients.
FIGURE 85-12. Age- and gender-specific annual incidence of peptic ulcer disease (A) and severe complications (B) (per 100,000) during 2000 to 2008 in Finland. (Adapted with permission from Malmi H, Kautiainen H, Virta LJ, et al. Incidence and complications of peptic ulcer disease requiring hospitalisation have markedly decreased in Finland. Aliment Pharmacol Ther. 2014;39[5]:496–506.)
FIGURE 85-13. Trends in hospitalizations for peptic ulcer disease in the United States according to age. (Data from Feinstein LB, Holman RC, Yorita Christensen KL, Steiner CA, Swerdlow DL. Trends in hospitalizations for peptic ulcer disease, United States, 1998–2005. Emerg Infect Dis. 2010;16[9]:1410–1418.)
FIGURE 85-14. Mortality rates (%) for upper GI complications in the different age groups in the United States in 2001 and 2009. (Data from Laine L, Yang H, Chang SC, et al. Trends for incidence of hospitalization and death due to GI complications in the United States from 2001 to 2009. Am J Gastroenterol. 2012;107[8]:1190–1195.)
Pathophysiology and Classification
Two main factors that might explain the observed increase in PUD in older patients are the high prevalence of H pylori infection and the frequent use of mucosa-damaging drugs, such as NSAIDs and aspirin. Nevertheless, in older subjects, around 20% of all peptic ulcers are not associated with either of these risk factors (Figure 85-15). The pathophysiology of “idiopathic” non-NSAID, non-H pylori peptic ulcers is still uncertain, but a critical disequilibrium between protective and erosive factors in the gastric or duodenal mucosa seems to be involved. Advancing age is associated with a reduction of the gastric mucosal barrier, that is, the capacity to resist external damage owing to secretion of gastric mucus, bicarbonate secretion, mucosal prostaglandins, gastric mucosal proliferation, and/or mucosal blood flow.
Moreover, in older patients with PUD, normal or high levels of gastric acid and pepsin secretions have been observed. Although advancing age is independently related to chronic atrophic gastritis and a functional status of hypo/achlorhydria, the atrophic changes of the gastric mucosa appear to be associated with H pylori infection rather than with aging. Emotional stress, smoking tobacco, and/or alcohol use are potential risk factors that may contribute to the development of PUD in predisposed subjects.
FIGURE 85-15. Prevalence of gastric and duodenal ulcer in older patients divided according to the presence of H pylori infection and/or NSAID use. (Data from Pilotto A. Helicobacter pylori-associated peptic ulcer disease in older patients: current management strategies. Drugs Aging. 2001;18[7]:487–494.)
H pylori–associated peptic ulcer disease Approximately 70% of older peptic ulcer patients are H pylori positive (see Figure 85-15). Treatment of H pylori infection heals ulcers in more than 95% of older patients and improves symptoms in more than 85%. Moreover, the eradication of H pylori infection improves clinical outcomes, reducing ulcer recurrences and symptoms.
Health programs leading to widespread eradication of H pylori infection in symptomatic patients have reduced the prevalence of PUD, particularly of duodenal ulcer, both in Europe and in the Far East. Unfortunately, the percentage of older patients with PUD who are treated for their H pylori infection is still quite low. In one US study, only 40% to 56% of patients older than 65 years who were hospitalized for PUD were tested for H pylori infection; among the H pylori positive patients, only 50% to 73% were treated with specific antibiotic-based anti-H pylori therapy.
NSAID/aspirin-associated peptic ulcer disease In older patients, approximately 25% of duodenal ulcers and 40% of gastric ulcers are associated with the use of NSAIDs and/or aspirin (see Figure 85-15). The risk of NSAID-related peptic ulcers and their severe complications tends to increase linearly with age and becomes particularly high in the presence of disability, comorbidity, and polypharmacy. A case-control study performed in more than 3000 older patients who underwent an endoscopy documented that subjects undergoing treatment with NSAIDs and/or aspirin had a higher prevalence of gastric and
duodenal ulcers compared to nonuser control subjects. Moreover, the risk of PUD is significantly higher in acute than in chronic users of NSAIDs or aspirin (Figure 85-16). The injurious gastroduodenal effects of NSAIDs and aspirin are mainly caused by the inhibition of COX-1 and its role in mucosal defense mechanisms, but also through the inhibition of thromboxane A2, which reduces platelet function, resulting in a higher risk of bleeding. However, a direct topical effect on the gastroduodenal mucosal surface cannot be excluded, especially for those NSAIDs that have a high acid/base pK ratio.
FIGURE 85-16. Prevalence of gastric ulcer and duodenal ulcer in NSAID users and NSAID nonusers. ASA, aspirin. (Data from Pilotto A, Franceschi M, Leandro G, et al. Proton-pump inhibitors reduce the risk of uncomplicated peptic ulcer in elderly either acute or chronic users of
aspirin/non-steroidal anti-inflammatory drugs. Aliment Pharmacol Ther. 2004;20[10]:1091– 1097.)
Different NSAIDs are associated with distinct GI adverse events; nevertheless, none of them has an absolutely safe profile. Drug-related pharmacodynamic and pharmacokinetic properties may explain the individual variability observed in adverse effects on the upper GI tract. A genetic predisposition due to polymorphism of cytochrome P450 2C9 (CYP2C9), which reduces the metabolism of some NSAIDs and prolongs drug exposure, thereby enhancing the risk of GI mucosal damage, has been reported as a potential factor influencing the ulcerogenic effect of the different NSAIDs.
Presentation
In older adults, PUD is often difficult to diagnose, as symptoms may be atypical. The patient’s concomitant diseases and treatments may cause symptoms that mask those of the ulcer. In patients older than 60 years, only one-third suffered from typical epigastric pain, and two-thirds experienced vague abdominal pain as a main symptom. Moreover, the intensity of pain may be less severe in older subjects and therefore may not receive the full attention of the physician or may not be taken seriously by the patient.
Nausea, vomiting, weight loss, and/or anorexia are frequently the first, or even the only, symptoms of PUD in older adults.
Unfortunately, the first symptom might be a severe complication, especially bleeding or stenosis. Because of the insidious clinical presentation of the disease in older patients, the consequences of PUD are more serious than those in younger subjects.
Evaluation
Upper GI endoscopy is always indicated for older subjects with new abdominal symptoms because of the high prevalence of serious gastric diseases in this age group. Upper GI endoscopy is safe and well tolerated in older adults. By direct visual identification, the location and size of an ulcer can be described. Peptic ulcer is a round to oval mucosal defect, from 5 mm to even 4 cm in diameter, with a smooth base and perpendicular borders. A series of biopsies to exclude malignancy is mandatory both in the center and on the borders of the gastric ulcer, even if they are not elevated or irregular as in ulcerative forms of advanced gastric cancer. Gastric biopsies allow for
identifying the presence and the severity of chronic gastritis and/or the presence of H pylori infection. Endoscopic healing is the gold standard for evaluating treatment success in clinical trials. The surrounding mucosa may present radial folds, as a consequence of the parietal scarring.
Barium radiography of the stomach and duodenum is indicated as part of second-line evaluation of the patient with suspected motility disorders, peptic strictures, or fistulas in which gastrografin may be used as an alternative to barium in contrast studies. Radiography is contraindicated in the presence of bleeding, severe vomiting with a high risk of pulmonary aspiration, or in cases of suspected gastric or duodenal perforation. If a peptic ulcer perforates, air will leak from the inside of the gastrointestinal tract to the peritoneal cavity. This leads to “free gas” within the peritoneal cavity that may be observed underneath the diaphragm on an erect or supine lateral abdominal x-ray.
Testing for H pylori infection Helicobacter pylori infection may be diagnosed by histologic evaluation, rapid urease testing, or culture performed on gastric biopsies taken during endoscopy (Table 85-5). However, the biopsy site needs to be selected with care, since H pylori may only be found in the fundus or body and not in the antral mucosa of older patients who are taking antisecretory drugs. Moreover, the presence of chronic atrophic gastritis as a result of a past colonization of H pylori may be associated with a lower prevalence of the bacterium in the gastric biopsy specimens in older than in younger patients. For the same reasons, the rapid urease test performed on gastric biopsies has lower sensitivity in subjects 60 years and older compared with younger patients.
TABLE 85-5 ■ TESTING FOR H PYLORI
The presence and severity of histologically proven gastritis can be investigated through the histological evaluation of morphological parameters of the gastric mucosa at two sites. All these findings suggest that in older adults (1) it is advisable to perform gastric biopsies from both the antrum and the body of the stomach; and (2) a second test for H pylori should be performed in this age group if a urease-based or histologic test is negative.
Posttreatment H pylori evaluation Successful eradication should always be confirmed by a noninvasive test, or by an invasive test if endoscopy is clinically indicated. Older patients with a diagnosis of peptic ulcer (especially gastric ulcer), gastric mucosa-associated lymphoid tissue (MALT) lymphoma, or severe gastritis should be evaluated by endoscopy and gastric mucosal histology after completion of anti-H pylori therapy. Most experts agree that this evaluation must be carried out at least 1 month after completion of therapy in order to minimize false-negative results. Older patients with mild or moderate forms of chronic gastritis may be evaluated after therapy by a
noninvasive test. The 13C-urea breath test has demonstrated higher sensitivity, specificity, and diagnostic accuracy than serology (IgG anti-H pylori antibodies) in older subjects. The H pylori stool antigen (HpSA) test has been suggested as a valuable option with a potential role in the diagnosis after eradication therapy. In hospitalized frail older patients, however, the HpSA was less accurate than the 13C-urea breath test, and both antibiotic therapy and corpus atrophy decreased its positivity rate.
Other noninvasive tests Serum pepsinogen I and II levels (sPGI, sPGII), which are biomarkers of inflammation, are known to increase in the presence of H pylori-related nonatrophic gastritis. sPGII levels are higher in subjects with both gastric and duodenal ulcer, and levels correlate with the severity of inflammation. A study in older people reported that sPGII levels decreased significantly after successful H pylori cure. Moreover, sPGI levels or the sPGI/sPGII ratio may be useful to identify atrophic gastritis of the gastric corpus, and assays for gastrin (particularly gastrin-17) may serve as an indicator of the morphological status of the antral mucosa.
Management
Eradication of H pylori infection There is currently a worldwide consensus that the first-line therapy for eradication of H pylori infection should be triple therapy with a PPI twice daily combined with clarithromycin 500 mg twice daily and either amoxicillin 1 g twice daily or a nitroimidazole 500 mg twice daily for a minimum of 7 days. The cumulative results of clinical trials evaluating anti-H pylori therapies in older subjects confirmed that PPI-based triple therapies for 1 week were highly effective and well tolerated. Adverse event rates of less than 13% were reported, with less than 6% of patients discontinuing therapy owing to these effects (Table 85-6). In older patients, a reduction of the PPI dosage from a twice-daily to a once-daily standard dose did not influence cure rates of PPI-based triple therapies. Since aging may modify the pharmacokinetics of clarithromycin, independent of renal function, clinical trials evaluated the efficacy of PPI-based triple therapies including clarithromycin at a low dose of 250 mg twice daily in older patients. Results demonstrated no significant differences in cure rates or tolerability between clarithromycin 250 mg and 500 mg twice daily.
All these findings suggest that in older patients, 1-week PPI-based triple therapies should include low doses of both PPIs and clarithromycin, in combination with standard doses of either amoxicillin or a nitroimidazole, to obtain excellent cure rates and tolerability. Increasing the duration of treatment from 7 days to 14 days is associated with higher eradication rates. At present, however, no studies have evaluated the clinical usefulness of these 2-week triple therapy regimens specifically in older patients. Studies
of a 10-day sequential regimen (5 days of a PPI plus amoxicillin followed by 5 additional days of a PPI plus clarithromycin and tinidazole) have reported higher eradication rates in comparison with standard triple therapy in both geriatric and younger adult patients.
TABLE 85-6 ■ CUMULATIVE RESULTS OF CLINICAL TRIALS EVALUATING 1-WEEK PPI-BASED TRIPLE THERAPIES, SEQUENTIAL THERAPIES, AND QUADRUPLE THERAPIES AGAINST HELICOBACTER PYLORI INFECTION IN OLDER PATIENTS
Low compliance and antibiotic resistance are the two major reasons for treatment failure. Primary resistance to amoxicillin remains uncommon, but the frequency of clarithromycin resistance has reached rates above 70% in some countries including China, nearly 50% in Portugal and Iran, and approximately 16% in the United States. Metronidazole resistance ranges between 20% and 30%, and is more frequent among women and in developing countries. Consensus exists in using clarithromycin and metronidazole for H pylori eradication therapies when prevalence of antibiotic resistance is lower than 15% and 40%, respectively.
Because failure of therapy is often associated with secondary antibiotic resistance, retreatment should ideally be guided by data on susceptibility.
Since such information is often unavailable, the choice of a second-line treatment depends on which treatment was used initially. Indeed, eradication of H pylori infection is more difficult when a first treatment attempt has
failed, and the optimal strategy for retreatment has not yet been established in older patients. Thus, specialist referral should be made in this situation.
NSAID-related PUD
Drug Treatment NSAID- or aspirin-associated peptic ulcers usually heal after 4 to 8 weeks of treatment with a PPI. Healing rates are higher if the NSAID or aspirin is stopped. At present, no consensus exists on the clinical usefulness of maintenance therapy with antisecretory drugs in patients who have stopped NSAID or aspirin treatment after healing of an NSAID-related peptic ulcer.
Prevention The following strategies have been suggested to prevent NSAID- or aspirin-related gastroduodenal peptic ulcer in older adults (Figure 85-17).
FIGURE 85-17. Strategies for the prevention of NSAID- or aspirin-related gastroduodenal peptic ulcer in older patients.
Identifying high-risk patients (multidimensional assessment). Current strategies to reduce ulcer relapse and/or complications are considered cost-effective in high-risk patients. Great importance is placed, therefore, on defining those patients who are at high risk for peptic ulcer and its complications when they are treated with gastro-damaging drugs. A history of upper GI symptoms, PUD, and/or bleeding, the presence of multimorbidity, and concomitant medications (particularly oral steroids, antiplatelet drugs, antithrombotic therapies such as low-molecular-weight heparins, and oral anticoagulants) all increase the risk of NSAID-related PUD and its complications. Moreover, functional and cognitive impairment, malnutrition, immobilization, and social factors, that is, the main determinants of multidimensional frailty, may negatively influence the outcomes and trajectories of drug-related PUD and its complications. Thus, a comprehensive geriatric assessment may be useful in the evaluation of the multidimensional risk of older patients.
Reduce dosage and use less GI-toxic NSAIDs. The risk of peptic ulcer and its complications appears to be directly related to the dose of the given NSAID or coxib. A lower risk of upper GI damage has been reported with NSAIDs with a short plasma half-life than with those with a prolonged half-life, whereas slow-release formulations carry an increased risk of ulcer complications.
Co-treatment with gastroprotective drugs. Misoprostol and PPIs are more effective than H2-blockers in preventing severe gastric and duodenal damage. However, misoprostol administered at effective doses of 200 μg four times daily causes more adverse effects, particularly diarrhea, than PPIs. PPIs are
very effective in preventing gastroduodenal injuries in both acute and chronic older users of NSAIDs (Figure 85-18). Similarly, PPIs prevent PUD and its complications in older patients treated with low-dose aspirin as antiplatelet therapy, independent of the presence of H pylori infection.
FIGURE 85-18. Absolute risk reduction (ARR) of peptic ulcer and the number needed to treat (NnT) in older acute and chronic users of nonsteroidal anti-inflammatory drugs (NSAIDs) and/or aspirin concomitantly treated with proton pump inhibitors (PPIs). (Data from Pilotto A, Franceschi M, Leandro G, et al. Proton-pump inhibitors reduce the risk of uncomplicated peptic ulcer in elderly either acute or chronic users of aspirin/non-steroidal anti-inflammatory drugs.
Aliment Pharmacol Ther. 2004;20[10]:1091–1097.)
Eradication of H pylori infection. NSAID use and H pylori infection are independent risk factors for PUD and gastroduodenal bleeding in older subjects. In H pylori–positive patients who are starting long-term treatment with NSAIDs, the cure of H pylori infection reduces the 6-month risk of PUD. In older high-risk patients, however, the use of PPIs concomitantly with the NSAID reduces the occurrence of both acute and chronic NSAID-related gastroduodenal damage more effectively than the eradication of H pylori infection alone. Moreover, after the eradication of H pylori, maintenance treatment with a PPI is effective in the prevention of ulcer bleeding in older patients. All these findings suggest that H pylori eradication may be a useful strategy to prevent NSAID-related peptic ulcer. Consequently, H pylori infection should be tested for and treated in older people in whom prolonged NSAID use is anticipated.
Educational programs. A crucial strategy in the prevention of NSAID-related adverse events is the discontinuation of unnecessary NSAID therapy. Indeed, studies from Canada and the United States estimated that 37% to over 50% of NSAID prescriptions in older patients with osteoarthritis were unnecessary, that is, inappropriate. Active interventions to improve the appropriateness of drug prescription, particularly in older adults, demonstrated a reduction in NSAID prescriptions. For example, a significant reduction in rehospitalization rates for PUD as well as in 1-year mortality was observed in older subjects who participated in a US quality improvement project that involved counseling of patients and their caregivers about NSAID toxicity.
Non-NSAID, non–H pylori PUD Antisecretory therapy remains the cornerstone of treatment to promote ulcer healing. Standard doses of PPIs should be prescribed for at least 4 weeks in patients with duodenal ulcers and for 8 weeks in patients with gastric ulcers. Generally, patients respond well to these therapies, and no established evidence supports the need for a longer duration or higher dose of antisecretory therapy in uncomplicated idiopathic ulcer. Older nonresponders should be investigated for possible underlying causes, including medication noncompliance, an acid hypersecretory state, or use of damaging drugs. Noninvasive gastric function testing and referral to a specialist in gastroenterology are advised in cases of H pylori-unrelated, persistent ulcers unresponsive to optimized PPI treatment.
Figures 85-19A and B summarize the therapeutic schemes for older people with PUD and its bleeding complication and drug-related gastroduodenal damage.
FIGURE 85-19. (A) Therapeutic schemes for older people to treat H pylori infection, stress ulcer prophylaxis, and peptic ulcer disease and its bleeding complication. (B) Therapeutic schemes for older people to treat and/or prevent drug-related gastroduodenal damage.
UPPER GASTROINTESTINAL BLEEDING
Definition
Upper gastrointestinal bleeding (UGIB) is defined as bleeding derived from a source proximal to the ligament of Treitz, and can be categorized as variceal or nonvariceal hemorrhage. UGIB can be classified as acute (presenting with hematemesis, melena, and/or hematochezia) or chronic (suspected because of the detection of occult gastrointestinal blood loss or anemia).
Epidemiology
There has been a decrease in both the incidence and mortality rates due to UGIB across all age groups in the United States. However, UGIB remains a significant cause of hospital admission and mortality in older subjects.
Indeed, the rates of admission for acute UGIB increase 30-fold between the
third and ninth decades, and 70% of acute UGIB episodes occur in subjects older than 60 years. Moreover, in older US residents, the risk of hospitalization for UGIB is associated with age above 80 years, limited instrumental activities of daily living, multiple comorbidities, and polypharmacy.
The mortality of patients who present with variceal hemorrhage has historically exceeded 30%, but has decreased over time, possibly as a result of advances in medical and endoscopic therapy; however, these data come from cohorts of patients with a mean age well below 65 years, and extensive data about older patients with variceal bleeding are lacking.
Pathophysiology
UGIB can be caused by peptic ulcers, gastric erosions, esophageal varices, and other causes such as gastric cancer (Table 85-7).
TABLE 85-7 ■ CAUSES OF UPPER GASTROINTESTINAL BLEEDING
Peptic ulcer disease PUD is the most frequent cause of major, life-threatening acute UGIB. Significant hemorrhage results from erosion of an underlying artery, and the magnitude of bleeding is related to the size of the arterial defect and the diameter of the artery. Consequently, bleeding may be
particularly severe from large posterior duodenal ulcers, which erode the gastroduodenal artery, as well as from lesser-curve gastric ulcers involving branches of the left gastric artery.
The high incidence of acute UGIB in older people has been attributed to many factors, including an increase in the use of NSAIDs as well as a high prevalence of upper GI disorders including GERD and PUD. Moreover, advanced age has been consistently identified as a risk factor for mortality among patients presenting with UGIB, presumably because of the high prevalence of frailty, comorbid illnesses, and polypharmacy in older adults as compared with younger patients.
Esophageal varices and portal hypertensive gastropathy Esophageal and gastric varices are caused by increased venous collateral flow from the portal circulation through the gastric coronary veins, usually because of portal hypertension. Variceal hemorrhage can occur when the hepatic venous pressure gradient exceeds 12 mm Hg. Features predictive of variceal hemorrhage include large variceal size and the presence of red "wale" marks on varices. The vessels may leak blood or even rupture, causing life-threatening bleeding. Portal hypertensive gastropathy results from venous congestion of the gastric mucosa; in most patients, this is caused by portal hypertension from cirrhosis.
Esophagitis and erosive gastritis These are superficial mucosal injuries in the esophagus and stomach, respectively. These lesions are most commonly caused by hypersecretion of gastric acid and/or erosive medications, such as NSAIDs and alendronate (see GERD section).
Mallory-Weiss tears These occur at the esophagogastric junction and are due to prolonged retching. Alcohol abuse is the usual cause, but other causes of vomiting (eg, chemotherapy, digoxin toxicity, renal failure, advanced malignancy) may be responsible. Bleeding usually stops spontaneously, and active endoscopic or surgical intervention is seldom required.
Presentation
Hematemesis, bloody gastric aspirate, or melena usually indicates bleeding from the upper GI tract. Older people with UGIB may have specific aspects of clinical presentation that require a comprehensive clinical approach. Indeed, UGIB may initially present with symptoms such as light-headedness, syncope, or postural hypotension; neurologic symptoms, including transient ischemic attack and/or stroke; cardiovascular symptoms, including ischemic heart disease; or even an episode of delirium or behavioral disturbance due to UGIB-related acute anemia or hypotension. These general symptoms are more frequent in frail older subjects with multimorbidity and polypharmacy.
Evaluation
The initial evaluation of the older patient presenting with features of acute UGIB includes a complete medical history, physical examination, and laboratory assessment including serum electrolyte and coagulation parameters, liver biochemical tests, and a complete blood count with the goal of assessing the severity of the bleeding. Details should be documented of prior UGIB episodes, previous abdominal surgery and current medication use, particularly aspirin, NSAIDs and oral anticoagulants, that is, warfarin or the newer oral anticoagulants (NOACs). Life-threatening peptic ulcer bleeding, with all-cause mortality rates as high as 12%, has been observed in patients on NOACs, especially in the presence of comorbidities such as coronary artery disease, stroke, peripheral vascular disease, and heart failure. High-dose dabigatran (150 mg bid), rivaroxaban, and high-dose edoxaban (60 mg daily) are associated with a higher risk of UGIB than warfarin. Other risk factors for NOAC-related UGIB include concomitant use of ulcerogenic agents, older age, renal impairment, H pylori infection, and previous history of UGIB. UGIB in cirrhotic patients has a nonvariceal origin in over 50% of cases. Physical findings of chronic liver disease are suggestive of underlying portal hypertension.
Upper GI endoscopy has three main purposes: (1) to provide an accurate diagnosis, (2) to give prognostic information, and (3) to carry out endoscopic therapy. Early endoscopic examination allows precise identification of the site and nature of the bleeding, and provides prognostic information. Indeed, stigmata of recent hemorrhage, graded according to the Forrest classification, predict the risk of further bleeding and guide management decisions (Figure 85-20).
FIGURE 85-20. Forrest classification of nonvariceal UGIB. (Data from Gralnek IM, Dumonceau JM, Kuipers EJ, et al. Diagnosis and management of nonvariceal upper gastrointestinal hemorrhage: European Society of Gastrointestinal Endoscopy (ESGE) Guideline. Endoscopy. 2015;47[10]:a1–a46.)
Endoscopy performed within 24 to 48 hours of the bleeding episode reveals an actual or potential site of bleeding in the majority of patients; 15% to 30% of all lesions are actively bleeding at the time of endoscopy. In any case, endoscopy should not be carried out until the patient has been adequately resuscitated and is in a stable clinical condition. Routine second-look endoscopy, in which repeated endoscopy is performed 24 hours after initial endoscopic hemostatic therapy, is not recommended; a second-look endoscopy should, however, be performed in patients with clinical evidence of recurrent bleeding.
Management
General management Hemodynamic status should be assessed immediately upon presentation and resuscitative measures begun as needed. Blood transfusion is indicated whenever the hemoglobin level is 8 g/dL or less. In older patients with higher hemoglobin levels, blood transfusion should be performed when there is clinical evidence of intravascular volume depletion or severe comorbidity, such as coronary artery disease, pulmonary disease, or renal insufficiency. Risk assessment should be performed to stratify patients into higher and lower risk categories, which may assist in initial decisions such as the timing of endoscopy, time of discharge, and level of care.
Peptic ulcer bleeding The best management of acute bleeding ulcer includes a combination treatment with endoscopy (eg, bipolar electrocoagulation, heater probe, clips, and/or injection of sclerosant or vasoactive drugs) and intravenous PPI with a bolus followed by continuous infusion. These treatments reduce the rates of rebleeding and surgical interventions, but
short- and long-term mortality rates are not reduced (Table 85-8). Endoscopic therapy for peptic ulcer hemorrhage seems to be well tolerated in older adults.
TABLE 85-8 ■ PPI THERAPY AND OUTCOME OF ENDOSCOPIC HEMOSTASIS IN BLEEDING PEPTIC ULCER
Pre-endoscopic PPI (80 mg omeprazole or pantoprazole bolus followed by 8 mg/h infusion) seems to decrease the need for endoscopic therapy but does not improve the final clinical outcome. After successful endoscopic hemostasis, continuous intravenous PPI therapy for 72 hours should be administered to patients who have an ulcer with active bleeding, a nonbleeding visible vessel, or an adherent clot. However, a meta-analysis found that oral and IV PPIs have similar efficacy after endoscopic treatment in controlling recurrent bleeding, the requirement for surgery, and mortality in patients with peptic ulcer bleeding. Thus, oral PPIs may be a useful and cost-saving alternative.
Recurrent bleeding after endoscopic therapy is treated with a second endoscopic treatment; if bleeding persists or recurs, treatment with surgery or interventional radiology is undertaken. In case of NOAC-related UGIB, initial management includes withholding the anticoagulant, followed by delayed endoscopic treatment. In severe bleeding, additional measures include the use of specific reversal agents such as idarucizumab for
dabigatran and andexanet alfa for factor Xa inhibitors (rivaroxaban, apixaban, edoxaban), and urgent endoscopic management.
Variceal bleeding Endoscopic variceal band ligation has supplanted endoscopic injection sclerotherapy owing to its lower rate of complications. Nonselective β-blockers appear to be effective for primary prophylaxis of variceal hemorrhage; however, older patients should be monitored closely for adverse effects, which include orthostasis, fatigue, and affective disturbance. Vasoactive drugs (somatostatin or its analogue, octreotide; vasopressin or its analogue, terlipressin) should be initiated as soon as variceal hemorrhage is suspected. Selected patients at high risk of liver failure or rebleeding may be considered for an "early" (within 72 hours) transjugular intrahepatic portosystemic shunt (TIPS). However, TIPS placement for control of variceal bleeding may be associated with higher mortality and hospitalization rates in older compared to younger patients.
Thus, the decision regarding use of TIPS should be made in consultation with a gastroenterologist.
UGIB in patients affected by portal hypertensive gastropathy is usually chronic and occult rather than overt and hemodynamically significant.
Reduction of portal venous pressure with a nonselective β-blocker might be beneficial for these patients; however, the data to support this indication are limited, especially in older subjects.
Prognostic evaluation of the older patient with UGIB Early stratification of patients into groups with different risks of rebleeding and mortality may be useful to support clinicians in the management of UGIB and in the choice of the most appropriate treatment, that is, medical, endoscopic, or surgical intervention. Indeed, prognostic scores have been developed that include endoscopy-based analysis of bleeding lesions, preendoscopic clinical scores, and combined clinical and endoscopic evaluation.
The Forrest classification is an endoscopy-based tool useful to stratify patients with UGIB into high- and low-risk categories for mortality and rebleeding, and it has been used to evaluate the endoscopic intervention modalities.
It is increasingly evident, however, that the prognosis of older patients with UGIB is strongly affected by a multiplicity of factors, including functional status, cognition, nutrition, multimorbidity, and polypharmacy, which are not directly related to the UGIB itself, suggesting that any prognostic model for these patients should be multidimensional in nature.
Three such models are described below and a summary is provided in Table 85-9.
TABLE 85-9 ■ CLINICAL PROGNOSTIC SCORES IN ACUTE UPPER GASTROINTESTINAL BLEEDING
The Glasgow-Blatchford score (GBS) is a screening tool to assess the likelihood that a patient with UGIB will need blood transfusion or endoscopic intervention. A score of 0 identifies low-risk patients who might be suitable for outpatient management; a patient with a GBS greater than 0 is considered at high risk, and should receive clinical evaluation and surveillance.
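The arithmetic behind the GBS can be sketched in a few lines of code. This is purely illustrative: the point thresholds below are taken from the original Blatchford publication, not from this chapter or Table 85-9, and any clinical use must follow a validated calculator and local protocols.

```python
def glasgow_blatchford(urea_mmol_l, hemoglobin_g_dl, systolic_bp, pulse, male,
                       melena=False, syncope=False,
                       hepatic_disease=False, cardiac_failure=False):
    """Glasgow-Blatchford score (0-23); 0 identifies low-risk patients."""
    score = 0
    # Blood urea (mmol/L)
    if urea_mmol_l >= 25:
        score += 6
    elif urea_mmol_l >= 10:
        score += 4
    elif urea_mmol_l >= 8:
        score += 3
    elif urea_mmol_l >= 6.5:
        score += 2
    # Hemoglobin (g/dL), with sex-specific bands
    if male:
        if hemoglobin_g_dl < 10:
            score += 6
        elif hemoglobin_g_dl < 12:
            score += 3
        elif hemoglobin_g_dl < 13:
            score += 1
    else:
        if hemoglobin_g_dl < 10:
            score += 6
        elif hemoglobin_g_dl < 12:
            score += 1
    # Systolic blood pressure (mm Hg)
    if systolic_bp < 90:
        score += 3
    elif systolic_bp < 100:
        score += 2
    elif systolic_bp < 110:
        score += 1
    # Clinical markers
    score += 1 if pulse >= 100 else 0
    score += 1 if melena else 0
    score += 2 if syncope else 0
    score += 2 if hepatic_disease else 0
    score += 2 if cardiac_failure else 0
    return score
```

A patient with entirely normal parameters scores 0 and, per the text, might be suitable for outpatient management; any abnormality yields a score greater than 0 and warrants clinical evaluation and surveillance.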
The Rockall score (RS) categorizes patients into risk groups. Patients with clinical RS (ie, before endoscopy) greater than 0 and patients with complete RS (ie, after endoscopy) greater than 2 are considered to be at intermediate or high risk for developing recurrent bleeding and death.
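By way of illustration, the pre-endoscopy (clinical) Rockall score can be sketched as follows. The age, shock, and comorbidity weightings are taken from the original Rockall publication rather than from this chapter, and should be checked against Table 85-9 and a validated calculator before use.

```python
def clinical_rockall(age, systolic_bp, pulse,
                     cardiac_or_major_comorbidity=False,
                     renal_liver_failure_or_metastases=False):
    """Pre-endoscopy (clinical) Rockall score, range 0-7."""
    score = 0
    # Age component
    if age >= 80:
        score += 2
    elif age >= 60:
        score += 1
    # Shock component: hypotension outweighs isolated tachycardia
    if systolic_bp < 100:
        score += 2
    elif pulse >= 100:
        score += 1
    # Comorbidity component: the worst applicable category counts
    if renal_liver_failure_or_metastases:
        score += 3
    elif cardiac_or_major_comorbidity:
        score += 2
    return score
```

As the text notes, a clinical score greater than 0 (or a complete, postendoscopy score greater than 2, which adds points for the endoscopic diagnosis and stigmata of recent hemorrhage) marks intermediate or high risk of rebleeding and death.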
The Multidimensional Prognostic Index (MPI) is a prognostic tool based on a standard comprehensive geriatric assessment that predicts short- and long-term mortality in older subjects. As shown in Table 85-9, MPI values range from 0 (low risk) to 1 (severe risk of mortality), and the MPI can also be expressed as three grades of risk. Patients with higher MPI grades have longer in-hospital stays and higher rates of institutionalization after hospital discharge.
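As a minimal sketch of how an MPI value maps to the three risk grades: the cutoffs used here (0.33 and 0.66) come from the published MPI literature, not from Table 85-9 itself, and should be confirmed against that table.

```python
def mpi_grade(mpi_value):
    """Map an MPI value in [0, 1] to the three published risk grades.

    Cutoffs are assumptions from the MPI literature (<=0.33 low,
    0.34-0.66 moderate, >0.66 severe), not taken from this chapter.
    """
    if not 0.0 <= mpi_value <= 1.0:
        raise ValueError("MPI must lie between 0 and 1")
    if mpi_value <= 0.33:
        return "MPI-1 (low risk)"
    if mpi_value <= 0.66:
        return "MPI-2 (moderate risk)"
    return "MPI-3 (severe risk)"
```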
Prevention
Long-term prevention of recurrent bleeding peptic ulcers Prevention is based on the etiology of the bleeding ulcer. Patients with H pylori-associated, NSAID- associated, or idiopathic bleeding ulcers should receive therapy as outlined in the previous section. In patients who must resume NSAIDs, a COX-2– selective NSAID at the lowest effective dose plus daily PPI is recommended.
In patients with low-dose ASA-associated bleeding ulcers, the need for aspirin should be assessed. If given for secondary prevention (ie, established cardiovascular disease), aspirin should be resumed as soon as possible after bleeding ceases in most patients: ideally within 1 to 3 days and certainly within 7 days. Long-term daily PPI therapy should also be provided. If given for primary prevention (ie, no established cardiovascular disease), antiplatelet therapy likely should not be resumed in most patients.
Long-term prevention of recurrent variceal bleeding Although β-blockers have an established role as primary prophylaxis (ie, reducing the risk of variceal bleeding in patients who have never bled), their use as secondary prophylaxis (preventing variceal rebleeding) is not established, and endoscopic variceal ablation by a program of repeated variceal banding is the treatment of choice.
AUTOIMMUNE ATROPHIC GASTRITIS
Autoimmune atrophic gastritis (AAG) may be found during endoscopic evaluation of patients for PUD or UGIB. It is a chronic progressive inflammatory condition that results in the replacement of the normal cell mass by atrophic and metaplastic mucosa. It represents an insidious condition that may go undiagnosed for many years, especially in older people. AAG is more common in women than men (3:1 ratio). There is an age-related
increase in the prevalence of AAG, from 2.5% in the third decade to 12% in the eighth decade. Although the trigger for this disease is incompletely understood, autoantibodies against parietal cells and against intrinsic factor result in achlorhydria and pernicious anemia, respectively. Diagnosis of AAG is based on the presence of mucosal atrophy of the gastric corpus/fundus in histological samples of biopsies obtained during esophagogastroduodenoscopy. A composite serological test using serum levels of gastrin-17, pepsinogen I and II, and antibodies against H pylori has been reported to be useful for identifying AAG, with negative and positive predictive values both above 95%. After serological diagnosis of atrophic gastritis, endoscopic evaluation with biopsy sampling allows exclusion of neoplastic lesions, which are more frequent in patients with AAG.
AAG has been reported in approximately 20% to 30% of cases of iron-deficiency anemia refractory to iron supplementation and of vitamin B12 deficiency. Concerning the clinical features of AAG, nearly half of subjects are
asymptomatic, especially in the initial phases. However, when present,
symptoms tend to be nonspecific and often misleading, such as early satiety, postprandial fullness, and reflux symptoms, the latter most probably linked to nonacid reflux. Importantly, PPIs are frequently prescribed to these patients, representing a clear case of inappropriate prescribing.
Once the diagnosis of AAG has been established, endoscopic surveillance is suggested at 3-year intervals, with the goal of early gastric cancer detection, and referral to a gastroenterologist for follow-up is recommended. Concomitant autoimmune diseases including thyroiditis and diabetes must be excluded, and parenteral supplementation of vitamin B12
must be ensured.
FURTHER READING
Barkun AN, Almadi M, Kuipers EJ, et al. Management of nonvariceal upper gastrointestinal bleeding: guideline recommendations from the International Consensus Group. Ann Intern Med. 2019;171:805–822.
Boghossian TA, Rashid FJ, Thompson W, et al. Deprescribing versus continuation of chronic proton pump inhibitor use in adults. Cochrane Database Syst Rev. 2017; 3(3):CD011969.
Costable NJ, Greenwald DA. Upper gastrointestinal bleeding. Clin Geriatr Med. 2021;37(1):155–172.
Coxib and Traditional NSAID Trialists’ (CNT) Collaboration, Bhala N, Emberson J, Merhi A, et al. Vascular and upper gastrointestinal effects of non-steroidal anti-inflammatory drugs: meta-analyses of individual participant data from randomised trials. Lancet. 2013;382:769–779.
Deutsch D, Romegoux P, Boustière C, et al. Clinical and endoscopic features of severe acute gastrointestinal bleeding in elderly patients treated with direct oral anticoagulants: a multicentre study. Therap Adv
Gastroenterol. 2019;12:1756284819851677.
Kamada T, Satoh K, Itoh T, et al. Evidence-based clinical practice guidelines for peptic ulcer disease 2020. J Gastroenterol.
2021;56(4):303–322.
Kurin M, Fass R. Management of gastroesophageal reflux disease in the elderly patient. Drugs Aging. 2019;36(12): 1073–1081.
Malfertheiner P, Megraud F, O’Morain CA, et al. European Helicobacter and Microbiota Study Group and Consensus panel. Management of
Helicobacter pylori infection-the Maastricht V/Florence Consensus Report. Gut. 2017;66(1):6–30.
Mishuk AU, Chen L, Gaillard P, et al. National trends in prescription proton pump inhibitor use and expenditure in the United States in 2002-2017. J Am Pharm Assoc. 2021;61(1):87–94.
Pilotto A, Custodero C, Maggi S, et al. A multidimensional approach to frailty in older people. Ageing Res Rev. 2020; 60:101047.
Pilotto A, Franceschi M. Helicobacter pylori infection in older people.
World J Gastroenterol. 2014;20:6364–6373.
Tursi A, De Bastiani R, Franceschi M, et al. Non-invasive assessment of gastric secretory function in centenarians. Geriatric Care. 2017;3:6682.
Yamasaki T, Hemond C, Eisa M, et al. The changing epidemiology of gastroesophageal reflux disease: are patients getting younger? J Neurogastroenterol Motil. 2018;24(4):559–569.
Chapter 86
Hepatic, Pancreatic, and Biliary Diseases
Dylan Stanfield, Mark Benson, Michael R. Lucey
INTRODUCTION
The liver and pancreas are remarkable in their ability to preserve function despite advanced age. Nonetheless, older patients are at increased risk of more severe injury when exposed to hepatic insults, likely because of the liver's age-related decrease in regenerative capacity. The aging pancreas undergoes changes in morphology and function, but these changes are often asymptomatic in the older individual. Throughout this chapter we review the hepatobiliary and pancreatic changes known to occur with aging and their pathologic consequences in older patients.
As will be discussed multiple times in this chapter, most of the therapies available for younger patients are safe and appropriate for use in the geriatric population. However, older patients with advanced pancreatic, liver, or biliary disease may not be eligible for curative treatments in some circumstances. For example, due to comorbid disease, few patients older than 70 years meet entry criteria for liver transplantation. Similarly, geriatric patients with newly diagnosed neoplasms of the pancreas or hepatobiliary system may not tolerate the extensive surgery that may be appropriate in a younger patient. Palliative care for patients with advanced pancreatic and hepatobiliary disease has become a well-recognized subspecialty and provides valuable therapies and counseling to alleviate suffering near the end of life.
LIVER DISEASE
Liver Morphology
For a number of years, liver volume was thought to decrease with age. More recently, it was determined that liver volume remains unchanged over time when adjusting for body surface area. Tagged albumin scans have shown that while corrected liver volume remains constant with age, functional hepatocyte volume decreases significantly. There is also a contemporaneous decrease in hepatic blood flow by approximately 35% to 40%. The cause of decreased blood flow is likely multifactorial—as a result of changes in cardiovascular output, diminished splanchnic blood flow, reduced portal vein blood flow, and increased resistance to portal flow. On the cellular level, both hepatocytes and the mitochondria of hepatocytes become hypertrophied, but decrease in overall number with age. Hepatocytes accumulate lipofuscin with age while undergoing a decrease in the concentration of smooth endoplasmic reticulum (SER), telomere length, and the activity of several liver microsomal enzymes.
Learning Objectives
Learn aging-associated changes in hepatic function, epidemiology, genetics, pathogenesis, and pathology of common hepatic diseases in older adults.
Understand the prevalence, common clinical presentations, diagnosis, and treatment of hepatotropic viruses, autoimmune liver diseases, chronic fatty liver diseases, and drug-induced injury in older patients.
Appreciate the challenges older persons encounter when living with chronic liver disease, including consideration of liver transplantation in older adults.
Learn the epidemiology, genetics, etiology, and pathogenesis of common biliary and pancreatic diseases in older adults.
Understand the common and atypical manifestations, and tests to diagnose common gallbladder and pancreatic diseases in older patients.
Learn state-of-the-art and emerging treatments for biliary and pancreatic disorders in the older population.
Acquire knowledge about the symptoms, diagnosis, and treatment of gallbladder and pancreatic cancers in older adults.
Key Clinical Points
1. Older adults are more prone to hepatic injury due to aging-associated changes, including a reduction in regenerative capacity, functional hepatocyte volume, and hepatic blood flow.
2. Patients with metabolic syndrome are at high risk for developing nonalcoholic fatty liver disease (NAFLD).
3. Drug-induced liver injury occurs more frequently in older patients and tends to be more severe.
4. Chronic fatty liver diseases, alcohol-related and non-alcohol-related, are the two most common causes of end-stage liver disease in older patients in the developed world. The advent of direct-acting antiviral agents for the treatment of hepatitis C virus (HCV) has changed the landscape of chronic liver disease.
5. Age is a major risk factor for gallstones, which are more common among older adults, especially women.
6. Acute cholecystitis may present atypically in older patients without fever, nausea, vomiting, or severe abdominal pain.
7. Ultrasound is the initial diagnostic test of choice, and incidental discovery of gallstones is not an indication for treatment. When appropriate, early laparoscopic cholecystectomy is the treatment of choice in older patients.
8. In the absence of gallstones and alcohol use disorder, the most common cause of acute pancreatitis in older adults is malignancy.
9. Alcohol use disorder is the most common cause of chronic pancreatitis in older patients.
10. The significance of alcohol use disorder in patients presenting with alcohol-related liver or pancreatic disease is frequently not recognized, and the opportunity to intervene with treatment of alcohol use disorder is missed.
Liver Function
Despite the observed changes in functional hepatocyte volume and blood flow, age-related changes to hepatic function are less evident in clinical practice (Table 86-1). The capacity to sustain liver function during aging is
reflected in the ability to successfully transplant livers from older deceased donors. Traditional liver chemistry tests, including serum aminotransferases, bilirubin, alkaline phosphatase, and gamma-glutamyl transpeptidase, do not change with age. Likewise, there are no significant changes in coagulation factors. Serum albumin slightly decreases with age, but typically remains within the normal range. Serum cholesterol and triglycerides increase with age since there is a gradual decline in the metabolism of low-density lipoprotein (LDL) cholesterol.
TABLE 86-1 ■ EFFECTS OF AGE ON THE LIVER
There are age-related changes in the hepatic metabolism of certain medications, which is important since more than 30% of prescription drugs are prescribed to older men and women. The incidence of adverse drug reactions increases significantly with age. Phase I drug metabolism relies on microsomal enzymes and proceeds by oxidation, reduction, demethylation, and hydrolysis. Phase II drug metabolism relies on cytosolic enzymes and proceeds by conjugation with several different polar ligands. Phase I reactions are usually catalyzed by the cytochrome P450 system in the hepatocyte SER. Phase I metabolism of several medications decreases by as much as 50% with increasing age. Interestingly, medications that undergo phase II metabolism remain unaffected by aging. The activity of phase I metabolism depends on oxygen delivery; thus, some of the age-related decrease in phase I metabolism could be explained by decreased hepatic blood flow as well as by decreased SER concentration. Medications with known phase I metabolism should be started at a low dose and titrated slowly in order to circumvent adverse drug reactions in older patients (Table 86-2).
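The 30% to 40% starting-dose reduction suggested above is simple arithmetic; a minimal sketch follows (the function name and the default 35% fraction are illustrative, not from this chapter, and any actual dosing decision belongs to the prescriber):

```python
def reduced_starting_dose(usual_adult_dose_mg, reduction_fraction=0.35):
    """Return a starting dose lowered by the given fraction.

    Illustrates the chapter's suggestion to start drugs with extensive
    phase I metabolism at a dose 30% to 40% below the usual adult dose.
    """
    if not 0.0 < reduction_fraction < 1.0:
        raise ValueError("reduction_fraction must be between 0 and 1")
    return usual_adult_dose_mg * (1.0 - reduction_fraction)

# e.g., a drug usually started at 10 mg would be started at
# roughly 6 to 7 mg under a 30% to 40% reduction
```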
TABLE 86-2 ■ DRUGS WITH EXTENSIVE PHASE I METABOLISM
Another important aspect of the effects of aging on hepatic function is the diminished ability of the liver to recover and regenerate in response to injury. The rate at which partial living-donor livers regenerate declines with age. Also, there is higher mortality in older patients after partial hepatic resection. Lastly, the hepatotoxic effects of hepatitis viruses and of medications such as acetaminophen and amoxicillin-clavulanate are more pronounced in older adults. Drug-induced liver injury is discussed later in this chapter. Refer to Table 86-3 and Figure 86-1 for information on patterns of drug injury and pathology findings, respectively.
TABLE 86-3 ■ DRUG-INDUCED LIVER INJURY
FIGURE 86-1. Examples of drug-induced liver injury. A. Macrovesicular steatosis— methotrexate. B. Microvesicular steatosis indicating mitochondrial injury: valproate, tamoxifen, tetracyclines. C. Hepatocellular necrosis in acetaminophen toxicity in zone 3. D. Necrosis due to isoniazid.
Fatty Liver Disease
Nonalcoholic fatty liver disease Older persons have not been spared from the rising prevalence of obesity seen throughout the developed world. In 2010, the predicted prevalence of obesity in Americans 60 years and older was 37%. This tsunami of obesity has gone hand in hand with a rising prevalence of metabolic syndrome, which comprises systemic hypertension, insulin resistance, central adiposity, elevated BMI, and dyslipidemia. Patients with metabolic syndrome are at high risk for developing nonalcoholic fatty liver disease (NAFLD). NAFLD is defined as the presence of more than 5% hepatic steatosis on imaging or liver histology, without evidence of steatohepatitis or fibrosis, after other causes of liver injury have been ruled out. A minority of patients with NAFLD have nonalcoholic steatohepatitis (NASH), which is defined as the presence of greater than 5% hepatic steatosis plus inflammation with hepatocyte injury. Whereas NAFLD without NASH is a relatively benign condition, NASH may progress to fibrosis and ultimately cirrhosis, portal hypertension, and liver failure.
Patients with NASH may develop hepatocellular carcinoma (HCC), which is usually associated with cirrhosis, although NASH-associated HCC in the absence of cirrhosis does occur. In one cross-sectional multicenter study from the United States, when compared to younger patients with NAFLD, older NAFLD patients had a higher prevalence of NASH (56% vs 72%) and advanced fibrosis (25% vs 44%).
NAFLD and NASH have the same presentation in older and younger persons, often as elevated liver-related chemistries and increased fat observed on liver imaging in the setting of metabolic syndrome. The keys to evaluation are the exclusion of secondary causes of liver injury (eg, heavy alcohol use, drug-related, viral hepatitis, metabolic disease), and the discrimination of NASH from NAFLD without NASH. The latter determination can start with noninvasive testing based on blood tests such as the FIB-4 test, followed by estimates of liver fibrosis using elastography.
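The FIB-4 test mentioned above combines routine laboratory values into a single score. A minimal sketch of the calculation, assuming the standard published formula and commonly cited cutoffs (neither is stated in this chapter):

```python
from math import sqrt

def fib4(age_years, ast_u_per_l, alt_u_per_l, platelets_10e9_per_l):
    """FIB-4 index = (age x AST) / (platelets x sqrt(ALT)).

    AST and ALT in U/L; platelet count in 10^9/L.
    """
    return (age_years * ast_u_per_l) / (platelets_10e9_per_l * sqrt(alt_u_per_l))

# Example: a 62-year-old with AST 48 U/L, ALT 36 U/L, platelets 180 x 10^9/L
score = fib4(62, 48, 36, 180)  # about 2.76
# Commonly cited (assumed) thresholds: <1.30 suggests low risk of advanced
# fibrosis and >2.67 high risk; because age inflates the score, a higher
# low-risk cutoff (about 2.0) has been proposed for patients over 65.
```

Patients whose score falls in the indeterminate or high-risk range would then proceed to elastography, as the text describes.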
Patients with fibrosis in the NASH setting are candidates for therapy. In patients with metabolic syndrome and elevated BMI, weight loss is the most effective way to reverse fibrosis and improve liver chemistries. Although diet, exercise, and risk factor modification are widely advocated, they are difficult to accomplish. Weight loss of more than 10% improves insulin sensitivity and liver tests. Although not yet demonstrated unequivocally,
weight loss may also reverse fibrosis. Bariatric surgery is appropriate in selected patients with NASH-associated liver fibrosis.
NASH cirrhosis has become the second most common indication for liver transplantation in the United States, and the most common in women. The specific issues related to liver transplantation in older persons are discussed elsewhere.
Alcohol-associated liver disease Almost 50% of adults older than 65 years and almost 25% of persons older than 85 years drink alcohol. Alcohol use disorder (AUD) in older persons has been called "the silent epidemic." AUDs afflict 1% to 3% of older subjects and are a cause of physical and psychiatric morbidity and social distress. In addition, up to 30% of older patients hospitalized on general medicine services, and up to 50% of those hospitalized on psychiatric services, present with AUDs. Unfortunately, AUD is often underreported or even unrecognized in these clinical settings.
Questionnaires such as AUDIT and AUDIT-C are simple tools that when used on a regular basis will greatly increase the recognition of underlying AUD.
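To illustrate how readily AUDIT-C lends itself to routine use, a minimal scoring sketch (the three-item 0-4 scoring and the thresholds of 4 for men and 3 for women are the widely used convention, assumed here rather than taken from this chapter):

```python
def audit_c(scores, sex):
    """Score the three-item AUDIT-C questionnaire (each item 0-4).

    Returns (positive_screen, total). Assumed thresholds: total >= 4
    for men, >= 3 for women, per common published practice.
    """
    if len(scores) != 3 or any(not 0 <= s <= 4 for s in scores):
        raise ValueError("AUDIT-C has exactly three items, each scored 0-4")
    total = sum(scores)
    threshold = 4 if sex == "male" else 3
    return total >= threshold, total

# The same responses can screen positive for a woman but not for a man:
# audit_c([2, 1, 0], "female") -> (True, 3)
# audit_c([2, 1, 0], "male")   -> (False, 3)
```

A positive screen is an indication for fuller assessment, not a diagnosis of AUD.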
Alcohol-associated liver disease (ALD) refers to a continuum of liver injury ranging from benign deposition of fat to cirrhosis, portal hypertension, and liver failure. In common with NAFLD, ALD causing cirrhosis is associated with HCC. In contrast with NAFLD, ALD may also present with an acute form of liver failure called alcoholic hepatitis (AH), a syndrome of jaundice and a systemic inflammatory reaction in the setting of recent consumption of alcohol. When severe, AH can result in multisystem organ failure (sometimes referred to as acute-on-chronic liver failure) and death.
Abstinence from alcohol is the sine qua non of all treatment of ALD. Unfortunately, ALD patients, whether younger or older, rarely receive formal treatment for AUD. Sustained abstinence can result in remarkable improvements in ALD, even after decompensating events such as ascites or variceal hemorrhage. Brief courses of corticosteroids offer a modest, albeit short-term, improvement for severe AH. ALD is the most common indication for liver transplantation in North America and Europe.
Viral Hepatitis
Unvaccinated older adults are susceptible to viral hepatitis, which can cause severe illness. Key aspects of the different types of viral hepatitis are
summarized in Table 86-4. This section will focus on the three most common types of viral hepatitis occurring in older people.
TABLE 86-4 ■ HEPATOTROPIC VIRAL DISEASE
Hepatitis A Hepatitis A virus (HAV) is an RNA virus spread via fecal-oral transmission. Acute HAV infection is diagnosed by the demonstration of HAV IgM antibodies in the serum of symptomatic patients. As the acute illness resolves, anti-HAV IgG antibodies develop, conferring lifelong immunity.
With the development of improved sanitation, the proportion of adults lacking immunity to HAV has increased. There is a safe and effective vaccine for HAV. Providers should screen and vaccinate older patients traveling to endemic areas. No outbreaks of hepatitis A have been reported at nursing facilities to date. While HAV vaccination is not required for most geriatric patients, it is safe and simple, and providers should have a low threshold for vaccinating susceptible patients of any age.
Hepatitis B Hepatitis B virus (HBV) is a DNA virus and blood-borne pathogen. Nearly 1.2 million people in the United States and over 350 million people worldwide are infected with chronic hepatitis B. Less than 5% of acute HBV infections arising de novo in adults living in the United States will lead to chronic infections. Acute HBV infection is relatively rare in older adults, as the primary risk factors associated with transmission are intravenous drug use and high-risk sexual behavior. There is an increased
risk of becoming a chronic HBV carrier with increasing age at acquisition, which is possibly related to an age-related decline in cellular immunity.
Chronic HBV infection is a risk factor for hepatocellular carcinoma, and this risk increases with age. With regard to therapy, entecavir and adefovir are safe to use irrespective of age. Lastly, the antibody response rate after immunization with the HBV vaccine decreases significantly with age. Older patients might require an additional injection to achieve an adequate immune response to the HBV vaccine.
Hepatitis C Hepatitis C virus (HCV) is an RNA virus and blood-borne pathogen. Prevalence estimates vary widely, but an estimated 71 million people worldwide have chronic hepatitis C. Acute HCV infections in the United States are almost exclusively confined to persons injecting street drugs. Chronic HCV infection develops in 50% to 80% of infected individuals, with subsequent development of cirrhosis in a significant proportion of these patients. Chronic HCV-related cirrhosis is associated with the development of hepatocellular carcinoma, and this risk increases significantly with age. Also, the severity of liver disease among patients with chronic HCV infection worsens with increasing age. Similarly, progression to cirrhosis is faster in older patients, and the serum HCV viral load is significantly higher than in younger adults. The older population should be screened for HCV, especially patients who used intravenous or intranasal drugs or received blood products prior to 1992.
In the developed world, the landscape of HCV has been changed radically by the development of highly effective, direct-acting, oral antivirals. These agents are easy to use, often for as little as an 8-week course, and have few side effects in most patients. The AASLD-IDSA clinical guideline is an excellent resource for practitioners who are not familiar with using these agents. Eradication of HCV is achieved in more than 90% of patients. The presence of cirrhosis is one factor that impairs responsiveness. Eradication of HCV in patients with established cirrhosis diminishes but does not remove the risk of development of HCC, and surveillance should be continued (see HCC below). Global challenges to the goal of HCV eradication include educating patients and treatment providers on the need to screen for HCV, increasing screening and diagnosis, as well as ensuring these efficacious medications are accessible and available.
Drug-Induced Liver Injury
Older patients consume the largest portion of both prescription and over-the-counter medications. Both age and polypharmacy are risk factors for drug-induced hepatotoxicity. Older patients often have altered pharmacokinetics and pharmacodynamics caused by changes in renal, hepatic, cardiovascular, and pulmonary function and decreased body mass. Drug-induced liver toxicity occurs more frequently in older patients and tends to be more severe. Clinically, drug-induced hepatotoxicity presents with nonspecific symptoms or subclinically, reflected only in serum laboratory abnormalities. Some medications have a well-recognized increased risk of liver toxicity with age. For example, isoniazid frequently causes some evidence of liver toxicity in patients older than 50 years, yet rarely causes liver damage in patients younger than 20 years. Health care providers caring for older patients should be aware of this increased risk of drug-induced hepatotoxicity and should be vigilant in assessing for hepatic inflammation. Medications that are heavily metabolized by the liver, some of which are listed in Table 86-2, should be initiated at a dose 30% to 40% lower than the average doses used in middle-aged adults. LiverTox, a website sponsored by the National Institutes of Health, is a very helpful source of drug-specific information (http://livertox.nih.gov/). Refer to Table 86-3 and Figure 86-1 for information on patterns of drug injury and pathology findings, respectively.
Primary Biliary Cholangitis
Primary biliary cholangitis (PBC) is characterized by immune-mediated destruction of the intralobular biliary system. It is predominantly regarded as a disease of middle-aged women, but since many patients with PBC live into old age, it is seen either as a new diagnosis or as a continuing diagnosis in older patients. Many patients remain asymptomatic and are diagnosed on the basis of an incidentally found elevated alkaline phosphatase level, though pruritus or lassitude may be early symptoms. Patients with PBC have a higher incidence of other autoimmune conditions such as thyroiditis, Sjögren syndrome, and celiac disease. Disruption of the metabolism of fat-soluble vitamins leads to an increased risk of osteoporosis. The antimitochondrial M2 antibody is a highly sensitive and specific serologic test for PBC, so liver biopsy is rarely required for diagnosis.
PBC is a slowly progressive disease that leads to portal hypertension and cholestatic liver failure in some patients. Treatment is directed to symptom management of pruritus, correction of malabsorption of fat-soluble vitamins,
and arresting progression using ursodeoxycholic acid. A small minority of patients may benefit from adding obeticholic acid to ursodeoxycholic acid, although this agent exacerbates pruritus and is contraindicated in patients with decompensated liver failure. Liver transplantation remains the ultimate therapy and should be considered in well-selected geriatric patients.
Autoimmune Hepatitis
Autoimmune hepatitis is a condition of unknown etiology leading to chronic hepatic inflammation and destruction, with resultant cirrhosis in untreated patients (see Mack et al. for a recent comprehensive review). Autoimmune hepatitis has a bimodal onset, with a second peak of presentation in persons in their sixth decade or older. While autoimmune hepatitis may present as elevated serum aminotransferases discovered incidentally in an otherwise healthy older person, it has a wide spectrum of presentation, including new-onset severe liver injury with markedly elevated aminotransferases. Initial serologic workup should include an IgG antibody level, antinuclear antibody, and anti-smooth muscle antibody, together with exclusion of other causes of liver injury. Liver biopsy is essential for making the diagnosis. Autoimmune hepatitis generally responds well to immunosuppressive therapy. Patients who have features of both autoimmune hepatitis and PBC or primary sclerosing cholangitis (PSC), so-called "cross-over syndromes," present a diagnostic and therapeutic challenge. For years, prednisone and azathioprine have been the cornerstones of treatment. More recently, budesonide has replaced prednisone in many cases for both initiation and maintenance therapy, thereby limiting the systemic side effects of corticosteroids.
Budesonide is not recommended in patients with established cirrhosis. Older patients receiving chronic therapy with corticosteroids need to be monitored closely for serious side effects such as osteoporosis, glucose intolerance, and cataract formation. Prognosis is likely similar between older patients and younger adults.
Primary Sclerosing Cholangitis (PSC)
PSC is a chronic condition of inflammation and scarring of medium and large-sized bile ducts. Up to 70% of patients with PSC involving both large and medium-size ducts have accompanying chronic inflammatory bowel disease, usually ulcerative colitis, whereas 10% of persons with IBD of the colon will have PSC at some point in their clinical illness. Some other
variants are often included under the rubric of PSC, including IgG4 cholangiopathy and cross-over syndromes with autoimmune hepatitis. Chronic inflammation and stricturing confined to the smaller bile ducts is sometimes called "small duct PSC." It is not associated with IBD and may represent a form of AMA-negative PBC. PSC should be distinguished from forms of secondary biliary sclerosis, which may have a vascular, infectious, malignant, or drug-induced origin. Typically, PSC presents in younger persons, but presentation in older age also occurs. Pruritus may be a prominent symptom. Elevation of the serum markers of cholestasis (alkaline phosphatase and later total bilirubin) is characteristic. In contrast to PBC and autoimmune hepatitis, autoantibodies are of little clinical utility. Diagnosis is made by cholangiography, initially MRCP; ERCP may be both diagnostic and therapeutic. While PSC causes characteristic changes in the liver (peribiliary sclerosis or "onion skinning"), small intralobular bile ducts are often less affected, and liver biopsy features are often nonspecific and rarely required to make the diagnosis. There is no approved medical treatment for PSC. The keys to management are:
Control of symptoms, particularly itching
Recognition and treatment of episodic ascending cholangitis
Endoscopic management of dominant biliary strictures, when possible
Surveillance for cholangiocarcinoma
Approximately half of all patients with PSC will develop dominant
biliary strictures leading to clinically significant obstructions. The specter of cholangiocarcinoma hangs over all patients with PSC and most cholangiocarcinomas develop within dominant bile duct strictures.
Interventional endoscopists are able to treat dominant strictures through endoscopic dilation and stenting, and to survey suspicious strictures with cytology, ERCP-guided trans-papillary biopsy, endoscopic ultrasound with fine needle aspiration, and cholangioscopy with direct biopsy. There is no established guideline for interval surveillance, but annual MRI/MRCP (where available) is a good option. Ultimately, liver transplantation is the treatment of choice in selected individuals.
Living With Chronic Liver Disease
Cirrhosis of the liver results in derangements in three interconnected clinical aspects of liver physiology: liver blood flow, hepatic metabolism, and the formation and secretion of bile. Perturbations to these functions may cause greater problems for older patients than for their younger counterparts.
Nowadays, many patients with established liver disease survive past 70 years and experience a combination of frailty and portal hypertension with recurrent ascites or hydrothorax, encephalopathy, and fatigue. Often, the process of injury to the liver continues unabated while the patient experiences these decompensating symptoms. Excessive alcohol consumption, or increased body mass in the setting of metabolic syndrome are just two common explanations for how liver injury can progress despite clinical deterioration. The resistance to portal blood flow due to pre- and intrasinusoidal injury in cirrhosis proceeds silently at first. The asymptomatic cirrhotic patient may have porto-systemic varices, splenomegaly, and thrombocytopenia. The most common first clinical manifestation of portal hypertension is ascites, or its variant portal hypertensive hydrothorax. The consequences of portal hypertension and liver failure which are called “decompensation” (ie, ascites, hydrothorax, encephalopathy, variceal hemorrhage, slow gastrointestinal hemorrhage from gastropathy) can prove difficult to manage in an older patient, particularly if he or she has other systemic diseases such as impaired kidney function or COPD. There has been an increasing recognition of the adverse effects of advancing cirrhosis on muscle mass.
The first and most important step in assisting older patients with decompensated liver disease is to have a frank discussion with them and their caregivers about the prospects for improvement in symptoms and prognosis, and then to set their goals of therapy. Wherever possible, we advocate that the injurious process be arrested, for example by institution of robust abstinence from alcohol in alcohol-associated liver failure.
Next, we aim to find a balance between the symptoms and the distress caused by treatment. A brief review of treatment of recurrent ascites or encephalopathy in an older patient illustrates the special challenges posed by the combination of liver failure and advanced age. Recurrent ascites due to portal hypertension with or without hydrothorax causes pain, dyspnea, and immobility and carries the risk of spontaneous bacterial peritonitis. Many older patients find it difficult to adhere to dietary salt restriction. The standard next step is to increase free water clearance with diuretics. Older
patients may not tolerate diuretics because of urinary incontinence or impaired kidney function. When diuretics fail or induce rising creatinine, or electrolyte disturbance (hyponatremia, hyperkalemia), the choices are limited and unsatisfactory: intermittent large-volume paracentesis which requires transport, is painful, and may lead to infection; or a transjugular intrahepatic portosystemic shunt (TIPS), which is confounded by encephalopathy. In a comfort-directed clinical plan, placement of an indwelling intraperitoneal catheter is reasonable, but should be viewed as an option for no more than a few final weeks.
Hepatic encephalopathy in an older patient is both difficult to improve and difficult to live with. It complicates other forms of age-related change in memory and mental acuity, making it hard to determine how much impairment reflects progressive dementia and how much is metabolic. Hepatic encephalopathy shows capricious variability. One of the biggest issues for some patients is the recommendation that they stop driving. Each new episode of encephalopathy necessitates a search for a precipitating event: variceal hemorrhage, electrolyte disturbance, inadvertent misuse of sedatives or other medicines that alter the sensorium, clandestine intracranial hemorrhage, and infection (such as SBP). Treatment of hepatic encephalopathy is less than satisfactory. Many patients find lactulose unpalatable, and they dislike the accompanying loose stools. Other patients may choose to stop lactulose on account of the taste or to avoid flatulence and fecal incontinence.
Frail patients with recurrent ascites or encephalopathy may be too ill to manage at home and require skilled nursing. Liver transplantation is an appropriate consideration in carefully selected patients, and age alone is not a contraindication. However, a careful assessment of comorbidities typically shows that many older patients, particularly after age 70, are not suitable candidates for liver transplantation. When in doubt, it is always appropriate for the primary care provider or geriatrician to contact their liver transplant center and discuss referring their patient. Similarly, other interventions such as TIPS are more hazardous in the over-70 cohort. There is a growing consensus that palliative care directed at improving symptom management and quality of life needs to become a priority in the care of patients with chronic liver disease.
HEPATOBILIARY CANCER
Hepatocellular Carcinoma
In Western Europe and North America, hepatocellular carcinoma (HCC) usually occurs in older patients, although the age of presentation is lower in communities where hepatitis B viral infection is endemic. HCC typically arises as a consequence of chronic inflammation of the liver that has resulted in cirrhosis. Consequently, in the individual patient, the prognosis of HCC is intertwined with the clinical stage and prognosis of cirrhosis (see the discussion of living with cirrhosis). Patients with cirrhosis should undergo surveillance for HCC with serial imaging and measurement of serum alpha-fetoprotein (AFP). While controversial, we recommend sonography and AFP every 6 months, until clinical judgment suggests that further surveillance lacks utility. Treatment of carefully selected geriatric patients with HCC, whether by surgical resection, liver transplantation, or anticancer therapies such as radiofrequency ablation, chemoembolization (TACE), radioembolization (yttrium-90), or chemotherapy, appears to carry morbidity and mortality similar to the same treatment given to younger patients. Thus, age should not be a contraindication to therapy for HCC in the appropriate older patient. In many tertiary centers, management of HCC is conducted by a team involving the subspecialties of hepatology, medical and radiation oncology, surgical oncology, transplant surgery, and interventional radiology.
Cholangiocarcinoma
Cholangiocarcinoma may arise de novo, in association with PSC, or more rarely with secondary forms of chronic biliary inflammation.
Cholangiocarcinoma affecting the large bile ducts may be discovered on surveillance in PSC or on presentation with ascending cholangitis or biliary obstruction. The diagnosis of cholangiocarcinoma is often difficult to make because the tumor is hypocellular, arises within a desmoplastic stroma, and is usually difficult to access. Treatment for cure is also challenging, although a minority of cases respond to surgery (Whipple procedure, or liver transplantation after extensive directed radio- and chemotherapy).
Gallbladder Cancer
Cancer of the gallbladder is uncommon in the United States and carries a poor prognosis. Often, this diagnosis is not made until late in its course. The 5-year survival for local disease is 42%, but drops to 0.7% with distant
spread of disease. SEER data show that gallbladder cancer occurs primarily in older adults. Approximately 75% of cases occur in patients older than 65 years. It has been noted that 80% of patients with gallbladder cancer have a history of cholelithiasis. Chronic inflammation from gallstones is thought to induce metaplasia of the gallbladder. Prophylactic cholecystectomy in certain higher-risk groups is controversial. Patients who may benefit from this procedure include those with a porcelain gallbladder, those with gallbladder polyps of 1 cm or greater, and those with a congenital anomaly of the pancreatobiliary duct junction.
BILIARY DISEASE
Cholelithiasis
Table 86-5 outlines the risk factors associated with gallstone formation. Age is a major factor, although the reasons are unclear. By the ninth decade of life, the prevalence of gallstones is 38% in women and 22% in men. The prevalence of gallstones in the US population increases by 1% per year in women and by 0.5% per year in men after age 15. The increased risk in women is related to estrogen-mediated increases in biliary cholesterol excretion.
Approximately 500,000 people in the United States develop symptomatic gallstones each year. The incidence of gallstones is increasing within the United States due to the increasing incidence of obesity.
TABLE 86-5 ■ RISK FACTORS FOR GALLSTONE FORMATION
The majority of patients with gallstones are asymptomatic, and the gallstones are found incidentally when a patient undergoes abdominal imaging. It is often a challenge to determine whether symptoms such as
chronic nonsevere abdominal pain are causally related to the newly discovered gallstones. Serious complications that may arise from cholelithiasis include severe abdominal pain (“biliary colic”), acute cholecystitis, chronic cholecystitis, and cancer of the gallbladder. These complications can be especially problematic in older patients with comorbid conditions.
Choledocholithiasis
With gallbladder contraction, gallstones can enter and potentially obstruct the common bile duct. Such obstructions can lead to pain, jaundice, ascending cholangitis, liver abscess, and acute pancreatitis. The prevalence of stones in the common bile duct increases with age. Although dilatation of the common bile duct correlates with choledocholithiasis, the common bile duct may become dilated as a result of the normal aging process or following a cholecystectomy due to the reservoir effect. Comparison to previous imaging studies to evaluate for interval change is important when available.
The classical signs and symptoms of biliary obstruction are called biliary colic and include epigastric or right upper quadrant pain, nausea, vomiting, and pruritus. Patients may develop dark urine and acholic stools. The presence of fever and jaundice is concerning for ascending cholangitis. The constellation of abdominal pain, jaundice, and fever, known as Charcot triad, is helpful in making the diagnosis of ascending cholangitis.
Acute Cholecystitis
Approximately 50% to 70% of cases of acute cholecystitis occur in the geriatric population. This condition is characterized by prolonged obstruction of the cystic duct by one or more gallstones. This leads to ischemia, inflammation, and possibly infection of the gallbladder. Acalculous cholecystitis, which occurs in 5% to 10% of cases, refers to inflammation of the gallbladder in the absence of gallstones. It is usually idiopathic, but tends to occur in debilitated and/or vasculopathic patients. The severity of cholecystitis varies from mild gallbladder wall edema to severe inflammation and, at its most catastrophic, necrosis or perforation of the gallbladder.
Acute cholecystitis has several classic clinical manifestations. The usual symptoms include abdominal pain, especially in the right upper quadrant or epigastrium, which may radiate to the back or shoulder. It is typically
continuous and severe. Patients may notice the onset of such pain after eating a fatty meal. Fevers, chills, nausea, and vomiting are common as well.
Acute cholecystitis may present atypically in older adults, and delayed recognition results in a greater risk of complications. Older patients usually have right upper quadrant or epigastric pain and tenderness, but other signs and symptoms may be lacking: more than 50% of older patients present without fever, nausea, or vomiting in acute cases, and these signs may be absent in 33% even in the setting of gallbladder gangrene or perforation.
Acalculous cholecystitis is more common in older adults, and diagnosis of such cases may be more difficult because of the lack of stones on imaging studies.
Chronic Cholecystitis
Chronic cholecystitis refers to biliary pain caused by recurrent episodes of cystic duct obstruction or direct irritation of the gallbladder wall due to stones leading to a chronic inflammatory response. Ultimately this may lead to scarring, fibrosis, and gallbladder dysfunction. On ultrasound or CT imaging, chronic cholecystitis has a more subtle appearance than acute cholecystitis. As with acute cholecystitis, elective cholecystectomy is the treatment of choice.
Investigations
In acute cholecystitis, ultrasound typically shows gallstones and pericholecystic fluid and/or edema. A sonographic Murphy sign is helpful in establishing the diagnosis. Ultrasonography will demonstrate the presence of stones with an accuracy of 90%. A HIDA (technetium-99m hepatic iminodiacetic acid) scan will show opacification of the bile ducts, but
not of the gallbladder, in cholecystitis.
Ultrasound is the usual initial diagnostic test of choice for choledocholithiasis with a sensitivity of 20% to 38% and specificity of 80% to 100% (Table 86-6). It has the advantage of being noninvasive, easily tolerated by the patient, widely available, and inexpensive. Abdominal CT has higher sensitivity than ultrasound (50%–88%), but often fails to identify stones less than 5 mm. Magnetic resonance cholangiopancreatography (MRCP) allows for visualization of biliary anatomy as well as stones and has sensitivity of 57% to 100% and specificity of 73% to 100% for common bile duct stones. It also has the advantage of being noninvasive, but its
disadvantages include cost and potential patient discomfort from claustrophobia.
TABLE 86-6 ■ IMAGING AND THERAPIES FOR CHOLEDOCHOLITHIASIS
Endoscopic ultrasound (EUS) is a useful imaging modality in cases where the presence of stones in the common bile duct is suspected, but remains uncertain. EUS has sensitivity and specificity of 94% and 95%, respectively, and sedation needs are similar to an upper endoscopy.
Endoscopic retrograde cholangiopancreatography (ERCP) is a therapeutic modality for treatment of choledocholithiasis. The injection of contrast into the biliary tree to obtain a cholangiogram allows visualization of all but very small stones. A biliary sphincterotomy is routinely performed to remove common bile duct stones and to prevent recurrent obstruction. In the case of cholangitis, a stent may be left in the common bile duct to drain purulent material, stones, and sludge. The disadvantages of ERCP include the potential for procedural complications such as acute pancreatitis, technical and infrastructural requirements to complete the procedure, and cost.
However, ERCP poses no greater risk in older adults than in the younger population. One study of ERCP in the geriatric population showed no difference in the rate of therapeutic ERCP complications in patients older than 80 years (6.8%) versus those younger than 80 years (5.1%). In a retrospective cohort between 2002 and 2005, there were no significant differences in successful biliary drainage, complications, or mortality in a group of 178 patients older than 75 years compared to 159 patients less than 75 years. Another study evaluated the safety of ERCP in patients older than 90 years, and the complication rate in this age group was 6.3% compared to 8.4% in patients aged 70 to 89. Therefore, ERCP is a safe procedure even in
those with very advanced age. The rate of recurrence of symptomatic common bile duct stones after ERCP was higher in patients older than 80 years (20%) versus 4% in patients 50 years old or younger; thus, older adults are at higher risk of needing a repeat ERCP. Despite this increased procedural burden, the American Society for Gastrointestinal Endoscopy (ASGE) does not list advanced age as an independent risk factor for post-ERCP pancreatitis, infection, or bleeding. Refer to Figure 86-2 for a review of multiple ERCP images.
FIGURE 86-2 A–D. Multiple ERCP images. A. Normal ampulla with cannulation. B. Bulging, ulcerated ampulla found to be adenocarcinoma s/p sphincterotomy and plastic biliary stent placement. C. Common bile duct stone removed with balloon. D. Common hepatic duct stricture due to cholangiocarcinoma. Endoscopic and fluoroscopic images s/p metal biliary stent placement. (Reproduced with permission from Deepak Gopal, MD, University of Wisconsin, Madison, WI.)
Treatment
Incidental discovery of gallstones is not an indication for therapy. Because asymptomatic gallstones tend to follow a benign course, cholecystectomy should be considered primarily in those with symptoms or complications.
Prompt treatment of acute cholecystitis is warranted to prevent clinical deterioration, complications, and progression to chronic cholecystitis. In addition to intravenous fluids, antibiotics, and pain control, surgery is the mainstay of treatment. Each patient’s clinical status and comorbidities must be considered in the decision to proceed with general anesthesia and surgery. Laparoscopic cholecystectomy has several advantages over open cholecystectomy, including less discomfort, shorter hospital stay, and lower cost. Complications from this procedure often involve biliary trauma. Age and comorbidities are predictors of surgical outcomes. Timing of intervention is also important, with better outcomes in patients who undergo early laparoscopic cholecystectomy for acute cholecystitis. Whenever possible, this should be performed within the same hospital stay for definitive treatment of gallstone disease.
As discussed above, in older patients with acute cholecystitis who are felt to be suitable surgical candidates, laparoscopic cholecystectomy is a reasonable option. However, for older patients who are very ill or have
significant comorbidities, gallbladder drainage prior to surgery may be a viable alternative. Percutaneous cholecystostomy drainage is an effective and often definitive treatment for both acute cholecystitis and acalculous cholecystitis in select cases. Patients with symptomatic gallstones who cannot or do not wish to undergo invasive procedures may benefit from dissolution of gallstones with ursodiol. This medication is useful only for cholesterol gallstones, and at least 50% of patients treated with ursodiol will have recurrent stone formation.
PANCREATIC DISEASE
Anatomy and Physiology
Figure 86-3 shows the gross anatomy of the pancreas. The pancreas can be divided into the endocrine and the exocrine pancreas. Four different types of islet cells comprising the endocrine pancreas produce hormones such as insulin, glucagon, pancreatic polypeptide, and somatostatin. The exocrine pancreas is composed primarily of acinar cells and duct cells. The acinar cells produce the digestive enzymes. The key enzyme is trypsinogen. The duct cells produce the bicarbonate-rich fluid for secretion. The exocrine pancreas has proteases, lipases, glycosidases (amylase), and nucleases. Most proteases and the nucleases are stored in an inactive form. Trypsinogen, the primary protease, is activated to trypsin in the duodenum by enterokinase.
Trypsin subsequently activates the other digestive enzymes and, in the presence of calcium, activates additional trypsinogen to trypsin. In the absence of calcium, trypsin exerts negative feedback by degrading other trypsin molecules.
FIGURE 86-3. Normal anatomy of the pancreas. (Reproduced with permission from Morton DA, Foreman KB, Albertine KH. The Big Picture: Gross Anatomy. New York, NY: McGraw Hill; 2011.)
There are two additional important mechanisms regulating pancreatic function. First, acid in the duodenum stimulates secretion of the hormone secretin, which, through vagal mediation, leads the duct cells of the pancreas to secrete bicarbonate. The second regulatory mechanism involves the cholecystokinin (CCK) feedback loop. Human pancreatic acini lack CCK receptors. The presence of protein in the duodenum increases CCK-releasing factor (CCK-RF), which stimulates CCK release. The elevated CCK level is sensed by the brain, leading to vagal stimulation of the acinar cells of the pancreas to secrete more digestive enzymes.
Aging and the Pancreas
Table 86-7 outlines the age-related changes of the pancreas. The maximal volume of pancreatic juice secreted in response to secretin and CCK stimulation increases until the fifth decade and decreases steadily thereafter. Bicarbonate secretion decreases steadily after the fourth decade of life. Also, in ERCP studies, a slight increase in the size of the pancreatic duct in the head and body, but not the tail, has been noted with aging. Despite these anatomic and physiologic changes, clinically apparent pancreatic exocrine deficiency is rare in healthy older adults.
TABLE 86-7 ■ AGING AND THE PANCREAS
Over and above the normal changes of aging, advanced age poses other threats to pancreatic structure and function. Pancreatitis secondary to alcohol tends to be a disease of middle age, while gallstone disease continues to have a relatively high incidence in older cohorts. The biggest risk in the older population is an increased incidence of malignancy, particularly adenocarcinoma. In addition, during imaging studies of the abdomen,
incidental small cystic and solid lesions of the pancreas are now frequently found.
Acute Pancreatitis
Acute pancreatitis has a variety of etiologies, but all eventually lead to activation of digestive enzymes, particularly trypsin, within the pancreatic acinar cells. The most common etiologies are gallstones and alcohol. However, in an older patient without gallstones or a history of chronic excessive alcohol use, the suspicion for underlying malignancy needs to be especially high. Inherited defects in the trypsinogen-to-trypsin pathway may lead to rare forms of acute pancreatitis, termed “hereditary pancreatitis.” Obstruction of the pancreatic duct by a gallstone is an important mechanism of initiating pancreatic injury. Alcohol, on the other hand, causes mitochondrial dysfunction, which allows increased levels of intracellular calcium and subsequent activation of trypsin.
Hypertriglyceridemia is often overlooked as a cause of acute pancreatitis. Triglycerides should be measured on admission to hospital because levels will often quickly decrease with bowel rest and fluids. Although many medications may cause acute pancreatitis, medication-induced acute pancreatitis is uncommon. See Table 86-8 for etiologies of acute pancreatitis.
TABLE 86-8 ■ ETIOLOGIES FOR PANCREATITIS
The incidence of acute pancreatitis is increasing in the United States. The predominant symptom of acute pancreatitis is epigastric pain, commonly with radiation into the back. Nausea and vomiting are also frequently present.
Acute pancreatitis is a clinical diagnosis supported by elevated serum amylase and lipase; levels at least 3 times the upper limit of normal are typical. Lipase is more sensitive than amylase and stays elevated for a longer period of time. Many other conditions cause minor elevations in amylase or lipase. Radiographic studies have little role in diagnosis, but do play important roles in evaluating etiology (gallstones, neoplasia, pancreatic calcification indicating chronic pancreatitis) and complications such as necrosis and pseudocysts. Acute pancreatitis can be divided clinically into mild (absence of organ failure or local complications), moderate (transient organ failure), and severe (persistent organ failure) disease. The clinical course ranges from self-limited mild disease to multiorgan failure and death. Approximately 20% of patients with acute pancreatitis have severe disease due to pancreatic necrosis. The overall mortality for patients with acute pancreatitis is 5%. Older patients with comorbid cardiopulmonary diseases are at increased risk of clinical complications from acute pancreatitis. A variety of prognostic grading systems, including the Ranson criteria, APACHE score, Glasgow score, and Balthazar CT grade with or without necrosis, correlate with morbidity and mortality.
Treatment of mild pancreatitis is accomplished with bowel rest, intravenous fluids, and pain control. Intravenous fluids, when given within the first 72 hours of presentation, can lead to improved clinical outcomes. Gallstone pancreatitis is treated with ERCP, biliary sphincterotomy, and stone extraction if cholangitis or evidence of biliary obstruction is present. However, the routine use of urgent ERCP is not recommended in patients with acute biliary pancreatitis without cholangitis. In patients with severe pancreatitis due to gallstones, cholecystectomy should be undertaken during the initial hospital admission.
Severe pancreatitis often requires management of multiorgan failure in an ICU setting. Although prophylactic antibiotics are not advised solely on the basis of predicted severe or necrotizing pancreatitis, evidence of infected necrosis on CT scan warrants intravenous antibiotics with good pancreatic penetration, such as imipenem. Patients who develop sepsis in the setting of pancreatic necrosis should be referred to interventional radiology or interventional gastroenterology for biopsy, Gram stain, and culture of the necrotic area to help tailor antibiotic therapy.
Evidence of infected pancreatic necrosis may require temporary percutaneous or endoscopic drain placement or surgical debridement.
The presentation of acute pancreatitis due to alcohol is an opportunity to initiate therapy for alcohol use disorder (AUD). Unfortunately, AUD is often overlooked, particularly in older persons. The patient with combined pancreatitis and AUD should be offered brief therapeutic intervention while an inpatient and referred for formal treatment of AUD on discharge.
Pseudocyst formation is an important complication of acute pancreatitis. Approximately 10% of patients with acute pancreatitis proceed to formation of these fluid collections. Most pseudocysts resolve with time; however, if symptoms are present (early satiety, pain, infection, bleeding), then drainage via interventional radiology, endoscopic cystogastrostomy, or surgery may be necessary.
Chronic Pancreatitis
Since 2016, the major pancreas societies have adopted a new “mechanistic definition” of chronic pancreatitis. It affirms the characteristics of end-stage disease (pancreatic atrophy, fibrosis, pain syndromes, duct distortion and strictures, calcifications, pancreatic exocrine dysfunction, pancreatic endocrine dysfunction, and dysplasia), but also addresses the disease mechanism: a pathologic fibroinflammatory syndrome of the pancreas in individuals with genetic, environmental, and/or other risk factors who develop persistent pathologic responses to parenchymal injury or stress.
Alcohol use disorder accounts for approximately 70% of cases of chronic pancreatitis, and idiopathic disease is the second most common cause. In older adults, it is important to evaluate for tumors compressing the pancreatic duct as a possible etiology.
The diagnosis of chronic pancreatitis can be challenging to make in the early stages of the disease. Approximately 85% of patients have epigastric postprandial pain. Amylase and lipase can be mildly elevated or normal. Structural changes, such as pancreatic atrophy, ductal dilation, and calcifications, can be detected by ultrasound, x-ray, or CT scan and are found in 30% to 40% of cases, leading to a diagnosis. Often, more advanced imaging studies such as ERCP, MRCP, or EUS are used to evaluate the anatomy of the pancreas and establish the diagnosis. The major features of chronic pancreatitis are pain, malabsorption, and, frequently, diabetes.
Treatment is based on resolving malabsorption with pancreatic enzyme replacement. Narcotic pain medication is often needed. If a dilated main pancreatic duct is present, a pancreaticojejunostomy (Puestow procedure) can be helpful. In some cases, near total pancreatectomy is needed with or without islet cell transplantation.
Pancreatic Cancer
Approximately 34,000 new cases of pancreatic cancer are diagnosed, and 33,000 patients die of pancreatic cancer, each year.
Approximately 87% of patients will be older than age 55 at diagnosis, with a median age of 72. The overall 5-year survival is 5%. Unfortunately, little progress has been made in changing the mortality of this disease over the last several decades. The genetics have now been further elucidated, showing that 85% of adenocarcinomas of the pancreas have an activating point mutation in the K-ras oncogene. In addition, 95% have an inactivated
p16 tumor-suppressor gene. Genetic changes and new knowledge about the cytokine milieu are continued areas of active research.
More than 70% of adenocarcinomas arise in the head of the pancreas, which leads to common presentations including jaundice, gastric outlet obstruction, and pain. Approximately 15% of lesions have no vascular invasion or metastatic disease at presentation, making surgical resection with curative intent possible. The 5-year survival in this population is approximately 20%. Staging with CT scans and EUS to evaluate for metastatic disease and vascular invasion is thus important.
Other pancreatic tumors include neuroendocrine tumors and many types of cystic lesions. Neuroendocrine tumors of the pancreas are typically solid. Evaluation by EUS with or without fine-needle aspiration (FNA), followed by surgical resection, is often performed except in multiple endocrine neoplasia (MEN) 1, where the pancreatic lesions are often multifocal.
Pancreatic Cysts
Cystic lesions of the pancreas are increasingly recognized because of more widespread abdominal imaging (Table 86-9). Incidental pancreatic cysts are noted on 1% of abdominal CT scans obtained for any reason. These lesions include pseudocysts, congenital cysts (also known as simple cysts), and cystic neoplasms including serous cystadenomas (SCN), mucinous cystic neoplasms (MCN), cystadenocarcinomas, and intraductal papillary mucinous neoplasms (IPMN). According to guidelines from the American Society for Gastrointestinal Endoscopy, cystic lesions of the pancreas, even when found incidentally, may represent malignant or premalignant neoplasms and require diagnostic evaluation regardless of size. Pancreatic pseudocysts and congenital cysts have no malignant potential. Serous cystadenomas represent nearly 30% of pancreatic cystic neoplasms and are most commonly found in women older than 70 years. After a lesion is identified on imaging studies, EUS with FNA is used to obtain fluid for cytologic evaluation, tumor markers, and amylase in order to classify pancreatic cystic lesions (see Table 86-9 and Figure 86-4). Given their high malignant potential, mucinous cystic neoplasms and main duct IPMNs are managed with surgical resection in geriatric patients who are healthy enough to tolerate surgery. Branch duct IPMNs may be surgically resected as well, though serial imaging is often appropriate based on current guidelines.
TABLE 86-9 ■ IMAGING FEATURES OF PANCREATIC CYSTS AND CYST FLUID ANALYSIS
FIGURE 86-4. Endoscopic ultrasound with fine needle aspiration in pancreatic cyst. This cyst proved to be a serous cystadenoma. (Reproduced with permission from Deepak Gopal, MD, University of Wisconsin, Madison, WI.)
FURTHER READING
Agarwal PD, Phillips P, Hillman L, et al. Multidisciplinary management of hepatocellular carcinoma improves access to therapy and patient survival. J Clin Gastroenterol. 2017;51(9):845–859.
ASGE Standards of Practice Committee, Buxbaum JL, Abbas Fehmi SM, et al. ASGE guideline on the role of endoscopy in the evaluation and management of choledocholithiasis. Gastrointest Endosc. 2019;89(6): 1075–1105 e15.
Blazer DG, Wu LT. The epidemiology of at-risk and binge drinking among middle-aged and elderly community adults: National Survey on Drug Use and Health. Am J Psychiatry. 2009;166(10):1162–1169.
Caputo F, Vignoli T, Leggio L, Addolorato G, Zoli G, Bernardi M. Alcohol use disorders in the elderly: a brief overview from epidemiology to treatment options. Exp Gerontol. 2012;47(6):411–416.
Chalasani N, Younossi Z, Lavine JE, et al. The diagnosis and management of nonalcoholic fatty liver disease: practice guidance from the American Association for the Study of Liver Diseases. Hepatology.
2018;67(1):328–357.
Crabb DW, Im GY, Szabo G, Mellinger JL, Lucey MR. Diagnosis and treatment of alcohol-associated liver diseases: 2019 Practice Guidance From the American Association for the Study of Liver Diseases.
Hepatology. 2020;71(1):306–333.
Crockett SD, Wani S, Gardner TB, Falck-Ytter Y, Barkun AN; American Gastroenterological Association Institute Clinical Guidelines Committee. American Gastroenterological Association Institute guideline on initial management of acute pancreatitis. Gastroenterology. 2018;154(4):1096–1101.
Gardner TB, Adler DG, Forsmark CE, Sauer BG, Taylor JR, Whitcomb DC. ACG Clinical Guideline: chronic pancreatitis. Am J Gastroenterol.
2020;115(3):322–339.
Ghany MG, Morgan TR, AASLD-IDSA Hepatitis C Guidance Panel.
Hepatitis C Guidance 2019 Update: American Association for the Study of Liver Diseases-Infectious Diseases Society of America recommendations for testing, managing, and treating hepatitis C virus infection. Hepatology. 2020;71(2):686–721.
Lindor KD, Bowlus CL, Boyer J, Levy C, Mayo M. Primary biliary cholangitis: 2018 Practice Guidance from the American Association for the Study of Liver Diseases. Hepatology. 2019;69(1):394–419.
Mack CL, Adams D, Assis DN, et al. Diagnosis and management of autoimmune hepatitis in adults and children: 2019 Practice Guidance and
Guidelines From the American Association for the Study of Liver Diseases. Hepatology. 2020;72(2):671–722.
Mathus-Vliegen EM. Obesity and the elderly. J Clin Gastroenterol.
2012;46(7):533–544.
Noureddin M, Yates KP, Vaughn IA, et al. Clinical and histological determinants of nonalcoholic steatohepatitis and advanced fibrosis in elderly patients. Hepatology. 2013;58(5):1644–1654.
Tandon P, Montano-Loza AJ, Lai JC, Dasarathy S, Merli M. Sarcopenia and frailty in decompensated cirrhosis. J Hepatol. 2021;75 (Suppl 1):S147– S162.
Tandon P, Walling A, Patton H, Taddei T. AGA clinical practice update on palliative care management in cirrhosis: expert review. Clin
Gastroenterol Hepatol. 2021;19(4): 646–656 e3.
Terrault NA, Lok ASF, McMahon BJ, et al. Update on prevention, diagnosis, and treatment of chronic hepatitis B: AASLD 2018 Hepatitis B Guidance. Clin Liver Dis (Hoboken). 2018;12(1):33–34.
Vege SS, Ziring B, Jain R, Moayyedi P; Clinical Guidelines Committee, American Gastroenterology Association. American Gastroenterological Association Institute guideline on the diagnosis and management of asymptomatic neoplastic pancreatic cysts. Gastroenterology. 2015;148(4):819–822; quiz e12–e13.
Zeeh J, Platt D. The aging liver: structural and functional changes and their consequences for drug treatment in old age. Gerontology.
2002;48(3):121–127.
Chapter
87
Constipation
Gerardo Calderon, Andres Acosta
INTRODUCTION
Constipation is a frequent health concern for older people in every health care setting. Primary care visits for constipation increase markedly in people older than 60 years, as does regular use of laxatives. Self-reported constipation in older people is associated with anxiety, depression, and poor health perception, while clinical constipation in vulnerable individuals may lead to complications such as fecal impaction, overflow incontinence, sigmoid volvulus, and urinary retention. Constipation is an expensive condition, with high costs ranging from laxative expenditure to nursing time. For instance, it is estimated that 80% of community nurses working with older people in the United Kingdom are managing constipation (particularly fecal impaction). An Australian study used in-depth, semistructured interviews to explore older individuals’ experiences with constipation, and their findings summed up feelings and problems no doubt shared by many older people across the developed world:
They feel “not right” in themselves when they are constipated.
Physicians can have a dismissive attitude about constipation and do not consider the problem seriously.
Patients are keen to find a solution, but feel useful and empathic advice and information are generally unavailable.
At the same time, they have a strong imperative for self-management including use of over-the-counter laxatives.
There are some barriers to lifestyle approaches, for example, expense of fruit and vegetables, fear of urinary incontinence with increased fluid
intake, or reluctance to walk out alone.
One-quarter still need to do self-manual removal despite measures taken.
This chapter describes the definition, epidemiology, risk factors, clinical presentation, assessment, and treatment of constipation in older adults. Data sources searched included the English-language literature (1966–2020), systematic reviews including the Cochrane database, reference lists from recent systematic reviews and book chapters, expert committee reports, society guidelines, and expert opinion. Levels of evidence are as used by the US Preventive Services Task Force:
Good evidence, Level 1: consistent results from well-designed, well-conducted studies
Fair evidence, Level 2: results show benefit, but strength limited by number, quality, or consistency of studies
Poor evidence, Level 3: insufficient because of limited number, power, or quality of studies
Learning Objectives
Define and identify the prevalence of constipation in the older adult.
Describe the pathophysiology of constipation in the older adult.
Understand how to diagnose and classify constipation in the older adult.
Explain the assessment and management of constipation in the older adult.
List complications of constipation in the older adult.
Key Clinical Points
Constipation is a common problem in the older adult.
Constipation is an expensive condition, with high costs ranging from laxative expenditure to nursing time.
Health care providers should routinely inquire about constipation symptoms in older people and be alert to the presence of clinical constipation in individuals unable to communicate.
In many older people with constipation symptoms, lifestyle advice (diet, fluids, exercise, toileting habits) will preempt the need for laxative therapy.
In higher-risk patients, a stepwise approach to prescribing laxatives, suppositories, or enemas should be used, with the goal of achieving comfortable and regular evacuation.
Rectal evacuation difficulties should be specifically addressed in order to identify conditions that may require additional interventions.
DEFINITIONS
Definitions of constipation in older people in the medical and nursing literature have been inconsistent. Studies of older people have tended to define constipation subjectively by self-report, according to specific bowel-related symptoms, or by daily laxative usage. Constipation is a syndrome of difficult bowel movements, characterized by difficult or infrequent passage of stool, hardness of stool, or a feeling of incomplete evacuation, that may occur in isolation or secondary to another underlying disorder. More broadly, constipation can refer to any condition that changes bowel function, such as reduced stool frequency, straining to defecate, hard stool, or inability to defecate. Patients and their physicians have vastly different perceptions of what constitutes constipation, and a patient-centered approach generally takes patients at their word. Use of the Bristol Stool Scale also puts the patient and practitioner on the same page (Figure 87-1). However, studying constipation in a more scientific manner requires specific criteria.
FIGURE 87-1. Bristol Stool Scale chart. (Reproduced with permission from Lewis SJ, Heaton KW. Stool form scale as a useful guide to intestinal transit time. Scand J Gastroenterol.
1997;32[9]:920–924.)
The Rome IV symptom criteria are useful in defining constipation in older people (Table 87-1). The Rome criteria overlap with constipation-predominant irritable bowel syndrome (IBS-C). Diagnostic criteria for IBS require recurrent abdominal pain at least one day per week in the last three months, associated with two or more of the following: pain related to defecation, a change in stool frequency toward less frequent stools, or a change in stool form toward harder stools. The IBS-C subtype requires that more than one-fourth of defecations have Bristol stool types 1 or 2 and less than one-fourth have Bristol stool types 6 or 7 (see Figure 87-1).
TABLE 87-1 ■ DEFINITIONS OF CONSTIPATION
PREVALENCE OF CONSTIPATION
Constipation prevalence is between 12% and 19% in North America, and prevalence increases with age. A systematic review reported that approximately 63 million people in North America meet the Rome II criteria for constipation, with a disproportionate number being older than 65 years. Table 87-2 provides practice guidance on screening and identifying risk factors based on evidence from epidemiological studies (prevalence, symptomatology, and risk factors) of constipation in older people.
TABLE 87-2 ■ PRACTICE GUIDANCE BASED ON EPIDEMIOLOGICAL EVIDENCE
Constipation symptoms should be routinely asked about in patients aged 65+ in view of the high prevalence of the condition in this population [2].
Men and women in their eighth decade and beyond should be regularly screened for constipation symptoms, as prevalence rises with advancing age [2].
Periodic objective assessment of constipation in older nursing home residents should be incorporated into routine nursing and medical care [2]. Patients unable to report symptoms owing to cognitive or communication difficulties should be especially targeted [3]. Such an assessment should occur at a minimum every 3 months (the 3-monthly incidence rate of new-onset constipation is 7% in nursing home residents), and optimally monthly.
Identifying Risk Factors
The identification of risk factors for constipation in older people is critical to effectively managing the condition [2].
Systematic identification of multiple risk factors in vulnerable older people with constipation should be incorporated into good practice guidelines in all health care settings.
Patients at increased risk of constipation from recognized comorbidities (eg, Parkinson disease, diabetes) should be regularly assessed for the condition [2].
Assessment
Identifying specific bowel symptoms in older individuals reporting constipation is important to guide appropriate management of this common complaint [2].
Reduced bowel movement frequency is not a sensitive clinical indicator for constipation in older people [2], though it is specific [3].
Difficulty with evacuation and rectal outlet delay are primary symptoms in older individuals [2].
An objective assessment should be undertaken in frail older people.
Self-Reported Constipation
One community-based study of 3166 persons aged 65 and older asked the question, “Do you have recurrent constipation?” and found a prevalence of 26% in women and 16% in men; in the 84+ age group, prevalence was 34% and 26%, respectively. Age was a strong independent risk factor for self-reported constipation. Other community studies support this relationship with age and show prevalence rates of up to 34% in women and 30% in men older than age 65. The preponderance of women over men reporting constipation tends to equalize after the age of 80 years. The incidence rate of new-onset constipation is 7% in nursing home residents screened every 3 months.
Infrequent Bowel Movements
Two or fewer bowel movements per week are below normal range and tend to signify slow-transit constipation. Weekly frequency of bowel movements is not changed with age, in contrast to self-reporting of constipation. In community-based studies:
Only 1% to 7% of both younger and older community-dwelling individuals report two or fewer bowel movements a week.
This consistent bowel pattern across age groups persists even after statistical adjustment for the greater number of laxatives used by older people.
Among older people complaining of constipation, less than 10% report two or fewer weekly bowel movements, and more than 50% move their bowels daily.
Difficult Evacuation
Symptoms other than infrequent bowel movements drive self-reporting of constipation in older people. These symptoms are predominantly straining and passage of hard stools. Of older people reporting constipation in a US community study, 65% had persistent straining and 39% had passage of hard bowel movements. Difficult rectal evacuation is a primary cause of constipation in older people. Twenty-one percent of community-dwelling people aged 65 and older had rectal outlet delay (according to Rome II criteria), and many describe the need to self-evacuate. Among frailer individuals, difficult evacuation can lead to rectal impaction and fecal soiling.
Constipation in the Acute Care Setting
According to the updated 2019 report from the Bowel Interest Group, 71,430 people in England were admitted to hospital with constipation between 2017 and 2018, which is equivalent to 196 people a day. The cost of treating constipation was £162 million. Also, the prescription cost of laxatives in England during this same period of time was £91 million, without considering over-the-counter laxatives.
Constipation in the Long-Term Care Setting
Long-term care residents are at increased risk of developing complications of constipation (Table 87-3) that may precipitate acute hospital admissions. Physical frailty in older persons does increase the prevalence of infrequent bowel movements, with 17% of nursing home residents reporting two or fewer bowel movements a week. Among the total population of long-term care residents self-reporting constipation, 33% have two or fewer bowel movements a week. A Finnish study showed the prevalence of chronic constipation and/or rectal outlet delay to be 57% in women and 64% in men living in residential homes, and 79% and 81% respectively in the nursing home setting. A UK study found that 64% of nursing home residents taking laxatives still reported straining on more than one in four occasions. This and the fact that 50% to 74% of long-term care residents use daily laxatives suggest that rectal evacuation difficulties are not being well managed in this population.
TABLE 87-3 ■ COMPLICATIONS OF CONSTIPATION IN OLDER PEOPLE
PATHOPHYSIOLOGY
Physiologic studies suggest that changes in the lower bowel predisposing toward constipation in older people are not primarily age-related. This is compatible with the epidemiology showing that (1) bowel movement frequency does not alter with aging, and (2) constipation symptoms are more prevalent in older people with comorbidities. Extrinsic causes such as reduced mobility, reduced fluid and dietary fiber intake, medical comorbidities, and related medications all impact colonic motility and transit and influence the pathophysiology of constipation.
Colonic Function
Colonic motility depends on the integrity of the central and autonomic nervous systems, gut wall innervation and receptors, circular smooth muscle, and gastrointestinal hormones. Propagating motor complexes in the colon are
stimulated by increased intraluminal pressure generated by bulky fecal content. Studies of total gut transit time (passage of radiopaque markers from mouth to anus, normally less than 72 hours), colonic motor activity, and the postprandial gastrocolic reflex show no differences between healthy older and younger people. Older people with chronic constipation do, however, tend to have a prolonged total gut transit time, ranging from 4 to 9 days.
Radiologic markers pass especially slowly through the left colon with striking delay in the recto-sigmoid, suggesting that total transit time is prolonged due to a decline in propulsive activity in the colon. This may be secondary to a reduction in colonic enteric neurons producing nitric oxide and acetylcholine. The prolongation in transit time is even greater in institutionalized or bedridden patients with constipation, with total gut transit time ranging from 6 to more than 14 days. Slow transit results in a cycle of worsening colonic dysfunction by reducing water content of stool (normally 75%) and shrinking fecal bulk, which then diminishes the intraluminal pressures, and hence the generation of propagating motor complexes and propulsive activity.
Intrinsic Mechanisms for Colonic Dysfunction in Older People With Constipation
Certain intrinsic mechanisms for altered colonic function in older persons with constipation have been postulated from physiologic studies (Table 87-4). The colonic epithelium secretes less water and electrolytes with aging, owing to a decrease in the number of crypts and nongoblet epithelial cells. Overall collagen deposition in the left side of the colon increases with aging, and this could alter colonic compliance and motility. Direct electrophysiologic measurement of colonic motor activity in older subjects has shown that the sigmoid motor response to intraluminal bisacodyl (a direct stimulant of the myenteric plexus) is diminished in patients who are constipated, implying a deficit in intrinsic innervation. Myenteric plexus dysfunction may partially account for impaired gut motility in older persons with constipation. The total number of neurons in the myenteric plexus decreases with increasing age, and this neuronal loss bears no relation to the presence of pseudomelanosis coli cells, implying that use of anthraquinone laxatives is not the primary cause. Interestingly, these aging effects on the colon are not present in calorie-restricted mice.
TABLE 87-4 ■ PATHOPHYSIOLOGICAL MECHANISMS FOR CONSTIPATION IN OLDER PEOPLE
Another possible intrinsic factor is age-related deficit in the density of inhibitory nerves, or in the binding sites on smooth muscle for inhibitory gut neuropeptides. In vitro studies of colons across age groups showed an age- related reduction in the amplitude of inhibitory junction potentials, but no decrease in the levels of inhibitory gut neuropeptides. This age-related decline occurs earlier in women as compared with men. Such a decrease in inhibitory nerve input to the circular smooth muscle could result in segmental motor incoordination, which may lengthen transit time and promote constipation in older persons with other predisposing risk factors.
Individuals older than age 60 have higher plasma concentrations of beta-endorphin with increased binding to opiate receptors in the gut wall and
myenteric plexus. Higher opiate binding has the effect of relaxing colonic tone, reducing motility, and inhibiting the gastrocolic reflex. Constipation is more prevalent in patients with nonulcer dyspepsia, as both conditions involve gastrointestinal hypomotility.
Anorectal Function
In normal defecation, colonic activity propels stool into the rectal ampulla causing distension and intrinsically mediated relaxation of the smooth muscle of the internal anal sphincter (or anal canal). This is followed promptly by reflex contraction of the external anal sphincter and pelvic floor muscles, which are skeletal muscles innervated by the pudendal nerve. The brain registers a desire to defecate, the external sphincter is voluntarily relaxed, and the rectum is evacuated with assistance from abdominal wall muscle contraction.
Age-Related Changes in Anorectal Function
There is a tendency toward an age-related decline in internal sphincter tone, particularly from the eighth decade onward. Clinically, this predisposes older individuals to fecal incontinence, particularly with loose stools. There is a more definite age-related decline (greater in women than men) in external anal sphincter and pelvic muscle strength, which can contribute toward evacuation difficulties. Failure of the anorectal angle to open and excessive perineal descent in older women can lead to constipation. In simulated defecation studies, 37% of non-constipated older subjects were unable to evacuate a small solid sphere. Consequent prolonged straining may compress the pudendal nerve, further exacerbating any preexisting weakness. There appears to be a reduction in rectal motility with normal aging, again mainly in the oldest age groups. Rectal sensation does not alter with normal aging.
Anorectal Dysfunction in Older Persons
The most common form of anorectal dysfunction in older people is rectal dysmotility, characterized by reduced rectal motility, increased rectal
compliance with a variable degree of rectal dilatation, and impaired rectal sensation such that the urge to pass stool is blunted (see Table 87-4). Over time, an increasing degree of rectal distension is required to reflexly trigger the defecation mechanism. These patients have rectal retention of hard or soft stool on digital examination of which they may be unaware. The resulting
rectal distension leads to relaxation of the internal sphincter and hence to fecal soiling. One study showed that rectal contractions could be elicited in only 14% of older people with a history of rectal impaction. One postulated cause for rectal dysmotility is diminished parasympathetic outflow as a result of impaired sacral cord function, for example, from ischemia or spinal stenosis. Rectal dysmotility can also develop through a persistent disregard or suppression of the urge to defecate that can occur with dementia, depression, immobility, or painful anorectal conditions. Voluntary increase in intra-abdominal pressure during defecation could overcome rectal dysmotility to produce enough of an increase in rectal pressure for evacuation to occur, but older people often have weakened abdominal musculature, limiting their ability to compensate in this way.
Pelvic floor dyssynergia, though more common in younger women, can cause rectal outlet delay in older people (Figure 87-2). This is caused by paradoxical contraction or failure to relax the pelvic floor and external anal sphincter muscles during defecation. Manometric studies show paradoxical increases in anal canal pressure on straining. This abnormal expulsion pattern occurs in individuals with severe and long-standing symptoms of rectal outlet delay and in patients with Parkinson disease.
FIGURE 87-2. A series of schematic diagrams that reveal the normal anatomy and physiology of the pelvic floor in the sagittal plane at rest, during defecation, and the key pathophysiologic changes in subjects with fecal incontinence and dyssynergic defecation. EAS, external anal sphincter; IAS, internal anal sphincter. (Reproduced with permission from Rao SS. Advances in
diagnostic assessment of fecal incontinence and dyssynergic defecation. Clin Gastroenterol Hepatol. 2010;8[11]:910–919.)
RISK FACTORS FOR CONSTIPATION IN OLDER PEOPLE
Both the epidemiology and pathophysiology of constipation in older people point to the enormous importance of identifying predisposing causes for the condition in each affected individual. One prospective study examined baseline characteristics predictive of new-onset constipation in older nursing home residents, using the US Minimum Data Set. Seven percent (n = 1291) developed constipation over a 3-month period. Independent predictors were White race, poor consumption of fluids, pneumonia, Parkinson disease, allergies, decreased bed mobility, arthritis, greater than five medications, dementia, hypothyroidism, and hypertension. The authors postulated that allergies, arthritis, and hypertension were associated primarily because of the constipating effect of drugs used to treat these conditions. Other studies have shown that institutionalization itself is an independent risk factor for symptom-based constipation in older people. Table 87-5 summarizes evidence-based risk factors of constipation in the older population.
TABLE 87-5 ■ RISK FACTORS FOR CONSTIPATION IN OLDER PEOPLE
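The independent predictors identified in the Minimum Data Set study above amount to a checklist, which can be sketched as a trivial tally. This is illustrative only: the key names are our own shorthand, and this is not a validated risk score.

```python
# Independent predictors of new-onset constipation reported in the
# nursing home Minimum Data Set study cited above (names are our own
# shorthand, not MDS field codes).
MDS_PREDICTORS = (
    "white_race", "poor_fluid_intake", "pneumonia", "parkinson_disease",
    "allergies", "decreased_bed_mobility", "arthritis",
    "more_than_five_medications", "dementia", "hypothyroidism",
    "hypertension",
)

def predictors_present(findings):
    """Given a dict of boolean findings keyed by predictor name,
    return the sorted list of predictors that are present."""
    return sorted(p for p in MDS_PREDICTORS if findings.get(p, False))

resident = {"parkinson_disease": True, "poor_fluid_intake": True,
            "arthritis": False}
print(predictors_present(resident))
# ['parkinson_disease', 'poor_fluid_intake']
```

A tally like this only flags residents for closer clinical review; as the study authors note, some predictors (allergies, arthritis, hypertension) probably act through the constipating drugs used to treat them rather than through the condition itself.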
Reduced Mobility
Impaired mobility is a common risk factor for constipation in older people. Greater physical activity (including regular walking) is associated with less self-reported and symptom-specific constipation in older people living both at home and in long-term care. Reduced mobility is the strongest independent correlate of heavy laxative use among nursing home residents, following adjustment for age, comorbidity, and other relevant clinical factors. Gut transit time in older subjects was 3 days in ambulant patients and 3 weeks in
bedridden patients, although comorbid factors were likely to be contributory. A study of healthy young male volunteers showed that after only 1 week of bed rest, both transit through the sigmoid colon and stool frequency were reduced. It is well documented that exercise increases colonic propulsive activity (“jogger's diarrhea”), especially when measured after eating. In a population survey of younger women (36–61 years), daily physical activity was associated with less constipation (defined as two or fewer bowel movements per week), and the association strengthened with increased frequency of physical activity. This suggests that increasing physical activity in adulthood may reduce the likelihood of constipation problems in older age.
Drug Side Effects
Polypharmacy increases the risk of constipation in older patients, particularly in nursing homes where each individual takes an average of six to nine prescribed medications per day. Anticholinergic medications reduce contractility of the smooth muscle of the gut via an antimuscarinic effect at acetylcholine receptor sites, and in some cases (eg, patients with schizophrenia taking neuroleptics), long-term use may result in chronic megacolon. In two cross-sectional studies of nursing home residents, anticholinergic antidepressants were independently associated with daily laxative use following adjustment for age, gender, function, and cognition.
Anticholinergic neuroleptics and antihistamines were also independently associated in one of the studies; non-anticholinergic sedatives, however, were not found to be constipating. A study of 532 community-dwelling older US veterans found that among the 27% using anticholinergic drugs, the rate of constipation (42%) was significantly greater than among those not using the drugs.
While older people are very susceptible to the constipating effects of opiate analgesia, a study of nursing home residents with persistent nonmalignant pain found no increased rate of constipation in chronic opiate users over a 6-month period compared with those not taking opiates; the authors also observed a general improvement in functional status and social engagement. Constipation in chronic opiate users can be effectively managed (by diet and laxative or suppository co-prescription where needed), an important finding because chronic pain is often undertreated in older people, perhaps owing to fear of the adverse effects of analgesic drugs. Community-based studies of adults receiving opiates for chronic pain have shown equal constipation risk for all sustained-release oral preparations. Transdermal patches (eg, fentanyl), however, are associated with a lower risk of constipation than oral preparations.
All types of iron supplements (sulfate, fumarate, and gluconate) cause constipation, the constipating factor being the amount of elemental iron absorbed. Slow-release preparations have a lesser impact on the large bowel, but this is because they tend to carry the iron past the first part of the duodenum into an area of the gut where elemental iron absorption is poorer. Administration of iron sulfate in doses greater than 325 mg per day does not substantially increase iron absorption in older people and may significantly increase gastrointestinal side effects. Intravenous iron does not cause constipation and may be an alternative in patients with chronic anemia (eg, chronic kidney disease) who have symptomatic constipation on oral iron.
In a 5-year study of calcium supplementation in older women, the only side effect was constipation (treatment 13% vs placebo 9.1%). The study showed that calcium supplementation reduced bone loss and turnover and fracture rates in older women who took it, but long-term compliance was poor, and constipation may have contributed to this.
Calcium channel antagonists impair lower gut motility, particularly in the rectosigmoid, by inhibiting calcium uptake into smooth muscle cells and altering intraluminal electrolyte and water transportation. Severe constipation has been reported in older patients taking calcium channel antagonists, with nifedipine and verapamil being the most potent inhibitors of gut motility in this class of drugs.
Nonsteroidal anti-inflammatory drugs (NSAIDs) increase the risk of constipation in older people, most likely through prostaglandin inhibition. In a large case-controlled primary care study, constipation and straining were more common reasons for stopping NSAIDs than dyspepsia. NSAIDs have also been implicated in causing stercoral perforation in patients with chronic constipation.
Aluminum antacids have been associated with constipation in older people living in both nursing homes and in the community.
Dietary Factors
Fiber Low consumption of wheat bran, fiber, vegetables, fruit, rice, and calories can all predispose toward constipation. A UK survey showed that
consumption of fruit, vegetables, and bread decreases with advancing age. The prevalence of constipation is likely rising in part because modern food processing produces refined food with low roughage. Community studies of older Europeans who eat a Mediterranean diet rich in fruit, vegetables, and olive oil show a low prevalence of constipation (4.4% in people aged 50+). Conversely, a German questionnaire survey of adults with and without constipation reported that chocolate, white bread, and bananas were the foodstuffs most strongly perceived to harden stools.
Calories Low calorie intake in older people (adjusted for fiber intake) is associated with constipation. One study looked at nutritional factors across all nursing homes in Finland and found that malnutrition and constipation were associated. This may be a two-way association in that marked constipation or fecal impaction can cause anorexia, while low calorie intake can promote constipation.
Enteral nutrition Constipation is a recognized problem in patients receiving enteral nutrition. A recent prospective multicenter longitudinal study from Spain followed adult patients (mean age 70 in males, 72 in females) receiving home enteral nutrition for a 4-month period and identified an incidence rate (IR) of 1.9 in males and 1.1 in females. Another common complication seen in this study was diarrhea (IR 1.6 in males, 0.6 in females), although not as frequent as constipation. Diarrhea is usually attributed to the feed volume or its osmolarity.
Fluid Intake
Amount Low fluid intake in older adults has been associated with symptomatic constipation in epidemiologic surveys and with slow colonic transit. In patients with Parkinson disease, low water intake has been associated with severity of constipation. Withholding fluids over a 1-week period in young male volunteers significantly reduces stool output. Older people are at greater risk of dehydration and resulting constipation because of:
Impaired thirst sensation
Less effective hormonal responses to hypertonicity
Limited access to drinks because of coexisting physical or cognitive impairments
Voluntary fluid restriction in an attempt to control urinary incontinence
Alcohol and coffee A large Japanese survey of constipation symptoms found that alcohol consumption was a preventive factor in men. A population survey of middle-aged women in the United States showed that daily alcohol consumption (exceeding 12 g/d) and low-moderate caffeine intake were independently inversely related to infrequent bowel movements. Black coffee has been shown to increase colonic motility specifically in the rectosigmoid within 4 minutes of ingestion in young healthy volunteers (a reaction not observed with ingestion of hot water), implying that caffeine triggers the gastrocolic reflex.
Parkinson Disease
Patients with Parkinson disease suffer from three primary pathologies that lead to constipation:
Primary degeneration of dopaminergic neurons in the myenteric plexus resulting in prolonged colorectal transit
Pelvic dyssynergia causing rectal outlet delay and prolonged straining
Small increases in intra-abdominal pressures on straining (compared with age-matched controls)
Constipation can become prominent early in the course of the disease,
even 10 to 20 years prior to motor symptoms. In a 24-year longitudinal study in Honolulu, less than one bowel movement a day was associated with a threefold risk of future Parkinson disease in men. A study of patients at a Parkinson disease clinic found that 59% were constipated according to the Rome criteria (vs 21% in age-matched control group without neurologic disease), and 33% were very concerned by their bowel problem.
Antiparkinsonian drugs can further exacerbate constipation. Pelvic dyssynergia affects 60% of people with Parkinson disease and may be hard to treat. Botulinum toxin injected into the puborectalis muscle has been used to improve rectal emptying in Parkinson disease patients with good effect, though repeat injections every 3 months are required to maintain clinical benefit.
Dementia
Dementia predisposes individuals to rectal dysmotility, partly through ignoring the urge to defecate. A study in which young men deliberately suppressed defecation resulted in prolonged transit through the rectosigmoid
with a marked reduction in frequency of bowel movements. Epidemiological studies show a significant association between cognitive impairment and nurse-documented constipation in nursing home residents. Patients with non-Alzheimer dementias (Parkinson disease, Lewy body, vascular dementia) are more likely than those with Alzheimer dementia to suffer from autonomic symptoms, including constipation.
Mood-Related Disorders
Depression, psychological distress, and anxiety are all associated with increased self-reporting of constipation in older persons. In certain cases, the symptom of constipation is a somatic manifestation of psychiatric illness. A careful assessment is required to differentiate subjective complaints from clinical constipation in depressed or anxious patients.
Stroke
Constipation affects 60% of those recovering from stroke while undergoing rehabilitation, and a high number of these have combined rectal outlet delay and slow-transit constipation. For stroke survivors living in the community, difficulties accessing the toilet due to residual functional impairment can worsen problems with bowel evacuation. Weakness of abdominal and pelvic muscles following stroke also contributes to problems with evacuation.
Spinal Cord Injury/Disease
Constipation affects the majority of people with spinal cord disease or injury. Age and duration of injury interact to promote complications of chronic constipation such as acquired megacolon, which affects more than half of patients with spinal cord injury. Lumbar stenosis in older people caused by degenerative joint disease may lead to cauda equina problems with severe rectal outlet delay. One study in younger people showed that an average of only 27% of rectosigmoid emptying was achieved with each defecation in patients with cauda equina syndromes, versus 81% in healthy controls.
Diabetes Mellitus
A Turkish study of outpatients with type 2 diabetes showed that 56% complained of constipation (vs 30% of controls). Neuropathy symptom scores are correlated with laxative usage and straining. Diabetic patients with autonomic neuropathy are more likely to be constipated because of
markedly slowed transit throughout the colon and impairment of the gastrocolic reflex. However, one-third of diabetic patients with constipation do not have neuropathic symptoms, so additional potentially reversible factors should be considered, particularly in older people (eg, drugs, mobility, fluids). For example, a US community study found that constipation and/or laxative use was increased in type 1 versus type 2 diabetic men, but this difference was associated with use of calcium channel blockers rather than with neuropathy symptoms. Acute hyperglycemia inhibits the gastrocolic reflex and colonic peristalsis, so glycemic control may be an important factor in the genesis of constipation. Colonic transit time in immobile older people with diabetes is extremely prolonged at 200 ± 144 hours. An Israeli study showed that this very long transit time in long-term care residents with diabetes can be significantly reduced by administering acarbose, an alpha-glucosidase inhibitor with a potential adverse effect of causing diarrhea.
Overall, gut dysmotility can lead to bacterial overgrowth and the clinical problem of explosive diarrhea; treatment with erythromycin and long-term motility agents such as metoclopramide should be considered in these individuals. The risk of developing tardive dyskinesia, a known adverse reaction to metoclopramide, is increased in older individuals. To decrease this risk, older adults should be advised to avoid continuous treatment for longer than 12 weeks. A drug holiday of 2 weeks, or a decrease of the metoclopramide dose as tolerated, with tight blood glucose control, is encouraged whenever clinically possible.
Metabolic Disorders
Hypokalemia produces neuronal dysfunction that minimizes acetylcholine stimulation of gut smooth muscle and so prolongs transit through the gut.
Hypokalemia should be excluded in cases of colonic pseudo-obstruction and sigmoid volvulus. Hypercalcemia causes conduction delay within the extrinsic and intrinsic innervation of the gut. Surgical treatment of hyperparathyroidism reverses the neuromuscular bowel dysfunction seen with this condition. Patients with myxedema have been observed to have edema of the gut wall with mucopolysaccharide deposition, although whether this contributes to the colonic hypomotility seen commonly in clinical
hypothyroidism is uncertain. Patients on long-term renal dialysis have prolonged age-adjusted transit time, more so in hemodialysis than peritoneal dialysis. In a questionnaire study in Japan, 63% of hemodialysis patients
complained of constipation. Important contributors to this problem included use of constipating drugs (49%), such as resin given to avoid hyperkalemia, as well as suppression of the defecation urge while undergoing dialysis and low fiber intake. Resin administration also places older inpatients at risk of fecal impaction.
Colorectal Cancer
Colorectal cancer has been associated with both constipation and use of laxatives, although this risk association is likely to be confounded by the influence of underlying bowel habits. One study, adjusted for age and potential confounders, found that having fewer than three reported bowel movements a week was associated with a greater than twofold risk of colon cancer, with the association being strongest in Black women. As the prevalence of colorectal cancer increases with age, the index of suspicion should be higher in older adults. Constipation alone, however, is not an indication for proceeding to colonoscopy (see below).
Rectocele
Posterior vaginal wall prolapse and rectocele are common in older multiparous women. These individuals have an increased risk of rectal outlet delay, particularly incomplete emptying and need for digital evacuation. This is presumably caused by mechanical obstruction, as this association is not seen in women with anterior pelvic prolapse.
COMPLICATIONS OF CONSTIPATION IN OLDER PEOPLE
Fecal Incontinence
Constipation is a common, treatable, preventable, and often overlooked cause of fecal incontinence in older people (see Table 87-4). Few medical symptoms are as distressing and socially isolating for older people as fecal incontinence, a condition that places them at greater risk of morbidity, mortality, dependency, and nursing home placement. All too often, untreated overflow leads to hospitalization of vulnerable older patients. Many older individuals in the community with fecal incontinence will not volunteer the problem to their physician and, regrettably, physicians and nurses do not routinely inquire about the symptom. This “hidden problem” therefore leads to social isolation and a downward spiral of psychological distress,
dependency, and poor health. Even when health care professionals identify older people with fecal incontinence, it is often poorly assessed and passively managed, especially in the long-term care setting where it is most prevalent.
In one study, overflow (continuous fecal soiling and fecal impaction on rectal examination) was the underlying problem in 52% of frail nursing home residents with longstanding fecal incontinence. A therapeutic intervention consisting of enemas until no further response, followed by lactulose, achieved complete resolution of incontinence in 94% of residents with full treatment adherence. Notably, this study showed that only 4% of nursing home residents with long-standing fecal incontinence had been referred for further assessment, reflecting a tendency toward unnecessarily conservative nursing management (eg, use of pads and undergarments only). Another nursing home study found that daily lactulose and suppositories plus weekly enemas effectively resolved overflow incontinence only when complete rectal emptying was consistently achieved over a period of 2 months. An effective therapeutic program for overflow incontinence depends on the following:
Regular toileting (ideally every 2 hours, which also promotes mobility)
Monitoring of treatment effect by rectal examination and bowel chart
Responsive stepwise drug and dosage changes
Prolonged treatment (at least 2 weeks)
Subsequent maintenance regimen to prevent recurrences
Fecal Impaction
Fecal impaction is an important cause of comorbidity in older patients, increasing the risk of hospitalization and of potentially fatal complications. A survey of patients admitted to acute geriatric units in the United Kingdom over 1 year reported that fecal impaction was a primary reason for hospitalization in 27%. In frail patients, fecal impaction may present as a nonspecific clinical deterioration; specific symptoms may include anorexia, vomiting, and abdominal pain. Findings on physical examination may include fever, delirium, abdominal distension, reduced bowel sounds, arrhythmias, and tachypnea secondary to splinting of the diaphragm. The mechanism for the fever and leukocytosis response is thought to be microscopic stercoral ulcerations of the colon. A plain abdominal radiograph will show colonic or rectal fecal retention associated with lower bowel dilatation. Presence of
fluid levels in the large or small bowel suggests advanced obstruction; the closer the fecal impaction is to the ileocecal valve, the greater the number of fluid levels seen in the small bowel. People with fecal impaction may also be predisposed to vasovagal reactions when the bowel is evacuated, which can cause complications such as syncope, vomiting, and aspiration.
Urinary Retention/Lower Urinary Tract Symptoms
Rectosigmoid fecal loading may impinge on the bladder neck, causing some degree of urinary retention. Two Finnish studies of older women and men showed an independent association between constipation and lower urinary tract symptoms (LUTS) in both genders. A case-control study in hospitalized women aged 65 and older found that after adjustment for relevant confounders, constipation was the primary predictor of urinary retention, increasing the risk of retention fourfold as measured by portable ultrasound postvoid residual (PVR) > 100 mL (other predictors were urinary tract infection and previous urinary retention). Urinary symptoms of difficult voiding were unreliable in diagnosing retention in this study, suggesting that it is good practice to do screening PVRs in hospitalized older women with constipation, particularly in the context of coexisting urinary tract infection. A prospective cohort study examined the impact of treating chronic constipation on coexisting urinary symptoms in older people (mean age 72). After 4 months, there was a significant improvement in constipation as well as in urgency, frequency, and voiding difficulty; mean PVR fell from 85 to 30 mL; and there were fewer urinary tract infections. There are also case reports in frail older people of bilateral hydronephrosis associated with renal failure that resolved following fecal disimpaction. Other urinary symptoms, including those of overactive bladder, may be caused by stimulation of pelvic nerves by the distended rectum. Thus, bowel management is a key aspect of managing urinary incontinence in older people.
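The screening threshold cited above can be captured in a short sketch. This is illustrative only: the function name and framing as a yes/no flag are my assumptions, and the 100-mL cutoff is simply the definition used in the study described in the text, not a validated decision rule.

```python
# Illustrative screening flag using the threshold cited in the text: in the
# case-control study, urinary retention was defined as a portable-ultrasound
# postvoid residual (PVR) > 100 mL. The function name is an assumption for
# illustration; this is not a validated clinical decision rule.

PVR_RETENTION_THRESHOLD_ML = 100

def pvr_retention_flag(pvr_ml: float) -> bool:
    """True when the measured PVR exceeds the retention threshold."""
    return pvr_ml > PVR_RETENTION_THRESHOLD_ML

print(pvr_retention_flag(85))   # False
print(pvr_retention_flag(130))  # True
```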
Stercoral Perforation
Fecal impaction increases the risk of stercoral perforation of the wall of the colon (usually sigmoid) secondary to ischemic necrosis. Stercoral perforation can also occur in chronically constipated persons where pressure from a hard fecaloma produces an ulcer with characteristically necrotic and inflammatory edges; these individuals tend to present with sudden onset of acute abdominal pain. Prompt surgical intervention and rigorous treatment of
peritonitis are needed to prevent the high mortality rate associated with this condition. A case-control study found that the most prevalent risk factor for colon ischemia in 700 cases was the use of drugs that cause constipation (one in three cases compared to only one in nine controls).
Sigmoid Volvulus
Chronic constipation in frail older people is the leading cause of sigmoid volvulus in the developed world. Volvulus is:
The third most common cause of large bowel obstruction in the United States
More likely in constipated patients with Parkinson disease and neuropathic colon (eg, from spinal cord disease or long-term neuroleptic treatment)
Associated with hypokalemia
Treated initially by sigmoidoscopic deflation but with a high recurrence rate
Managed surgically, usually by partial colectomy, when sigmoidoscopic deflation fails
Colonic Pseudo-Obstruction
Acute colonic pseudo-obstruction (Ogilvie syndrome) is most likely to occur in hospitalized frail older people with a history of chronic constipation who are acutely medically ill or in the postoperative phase. It presents with abdominal distension and colonic dilatation on x-ray, with a cecal diameter of 10 cm or more. Nothing by mouth, a flatus tube, and correction of electrolyte imbalances (particularly potassium and magnesium) are initial treatments, progressing to neostigmine (if there are no cardiac contraindications), and then endoscopic decompression if dilatation persists. Administration of polyethylene glycol after initial resolution of colonic dilatation has been shown to reduce the likelihood of recurrence requiring escalation of therapy.
Rectal Prolapse
Prolonged straining at stool in constipated patients can result in rectal prolapse of varying degrees, and older people are more at risk of developing fecal soiling as a result. Surgery should be considered for full-thickness prolapses, and laparoscopic transabdominal repair is now an effective treatment (including improving bowel-related symptoms) with a low recurrence rate.
Diverticular Disease
Left-sided diverticulosis coli affects 30% to 60% of people older than 60 in developed countries. The etiology has been attributed to high intraluminal pressures while straining at stool in people who have a low-fiber diet. A case-control study of patients (mean age 68) with acute uncomplicated diverticulitis showed 74% to have prolonged transit (longest in those with constipation symptoms), and 59% had small intestinal bacterial overgrowth. Newer approaches to preventing recurrence of symptomatic flare-ups of diverticular disease include use of mesalazine and Lactobacillus casei, separately or in combination.
Impact on Quality of Life
Functional bowel symptoms in older people can impair quality of life, even after adjustment for other chronic illnesses. Patients with constipation generally have an impaired quality of life compared with the general population, though few studies have looked at this specifically in older people. A Hong Kong study of community-living people aged 70 and older showed an independent association between constipation and low morale as measured by the Philadelphia Geriatric Morale Scale. A Canadian study of the general population found an association between constipation and a low SF-36 score, with the rate of physician visits for constipation being strongly associated with the physical component of the SF-36. The Patient Assessment of Constipation Quality of Life questionnaire is a validated tool for assessing quality of life over time in older adults in long-term care; scores correlate with abdominal pain and constipation severity. Patients whose constipation is associated with abdominal pain or other irritable bowel symptoms score even lower on quality-of-life measures and have poorer general health perception. Constipation in long-term care residents unable to communicate because of dementia has been linked to physically aggressive behavior. A US study of almost 9000 nursing home residents examined characteristics associated with the development of wandering behavior over a 1-year period and found that constipation increased the risk almost twofold. The authors postulated that residents with dementia may
wander to alleviate constipation-related discomfort, and nursing home health professionals should be alert to this.
CLINICAL EVALUATION
General Assessment
Older patients with constipation should have an assessment focusing on predisposing causes. In addition to bowel symptoms, the history should include over-the-counter medications, diet, and fluid intake. Physical examination should include cognition, mood, and function in addition to abdominal and rectal examinations. Laboratory tests when indicated include complete blood count; plasma electrolytes; calcium; glucose; and liver and thyroid function tests. Figure 87-3 illustrates a recommended approach to the patient with chronic constipation.
FIGURE 87-3. A practical approach to assessment of constipation in older people. *Because anorectal manometry and rectal balloon expulsion test may not be available in all practice settings, it is acceptable in such circumstances to proceed to assessing colonic transit with the understanding that delayed colonic transit does not exclude a defecatory disorder. (Reproduced with permission from American Gastroenterological Association, Bharucha AE, Dorn SD, et al. American Gastroenterological Association medical position statement on constipation.
Gastroenterology. 2013;144[1]:211–217.)
History
Table 87-6 lists the important aspects of the bowel history in older people who complain of constipation. It is essential to identify rectal evacuation difficulties in order to manage the patient effectively. Although constipation may be underestimated in older patients with dementia and depression, studies have shown that adults complaining of constipation frequently
underestimate the number of bowel movements. Thus it may be helpful to have them keep a stool chart for a week to document the frequency and characteristics of their bowel movements and associated symptoms. The Bristol Stool Form Scale, a validated tool for assessing stool consistency, may be used when the duration of constipation is not clear during history taking (see Figure 87-1). Bristol stool types 1 and 2 indicate hard and lumpy stool consistency, respectively, and may be a more reliable indicator of colonic transit than stool frequency. Patients may also complain of straining, use of manual maneuvers, and incontinence.
TABLE 87-6 ■ DIAGNOSIS OF CONSTIPATION IN OLDER PEOPLE
A recent history of altered bowel habit should prompt an exploration of precipitants (eg, new medications, changes in diet), and where unexplained, an evaluation for colorectal cancer. Family history of colorectal cancer should be obtained. Abdominal pain, rectal bleeding, and certainly any systemic features such as weight loss and anemia should prompt further investigations for underlying neoplasm.
Perianal fecal soiling is a common and embarrassing symptom that patients are reluctant to volunteer. In one large nursing home study, 38% of older individuals who complained of constipation reported fecal soiling of undergarments. Overflow fecal incontinence typically presents as frequent passive leakage of watery stool, sometimes confusing patients and caregivers who think they have “diarrhea” rather than constipation. Fecal impaction must be ruled out in the presence of fecal soiling or incontinence. The other important diagnoses to consider in older people are loose stools caused by inappropriate laxative use, other drug side effects, or undiagnosed bowel disease.
IBS should be a diagnosis of exclusion in older people and only made in those with a many-year history of intermittent symptoms such as abdominal distension or pain relieved by defecation, passage of mucus, and feeling of incomplete emptying (Rome criteria). Rectal pain associated with defecation should alert the physician to rectal ischemia as well as to other more common anorectal conditions. Rectal bleeding should prompt further evaluation for an underlying tumor, unless examination clearly reveals bright red blood from anal fissure or hemorrhoids. Lower urinary tract symptoms may be exacerbated by constipation and should be documented.
A person’s attitude toward their bowel problem (positive, acceptance, denial, distress, apathy) and the impact on their quality of life should be included when taking a history. Some health care providers share the generally held belief that constipation is an inevitable consequence of aging, and patients may feel that their problem is not taken seriously. A thorough clinical history and assessment is an important first step in developing a sound patient–physician partnership, which enhances successful outcomes in managing what is usually a chronic condition.
Digital Rectal Examination
Digital rectal examination is required in all patients who report constipation to reveal rectal impaction, rectal dilatation, hemorrhoids, anorectal disease, and perianal fecal soiling. Retained stool in rectal impaction does not have to be hard; loading with soft stool is common in older people taking laxatives who have problems with rectal outlet delay. Absence of stool on rectal examination does not exclude the diagnosis of constipation. A dilated rectum with diminished sensation and retained stool suggests rectal dysmotility.
External sphincter tone is assessed by asking the patient to “squeeze and pull
up” around the examining finger. Indicators of reduced internal anal tone are easy insertion of the finger into the anal canal and gaping of the anus on applying gentle traction to the anal margin. Anal sphincter weakness should prompt (1) careful prescribing to avoid causing fecal leakage through excessive laxative-induced softness of stool, and (2) instruction in exercises to strengthen the anal sphincter (Table 87-7). An absent cutaneous-anal reflex (gentle scratching of the anal margin should normally induce a visible contraction of the external sphincter) and, in particular, perianal anesthesia point to significant sacral cord dysfunction with associated rectal dysmotility. Proctoscopy is a simple, quick, and useful test for diagnosing internal hemorrhoids and abnormalities of the rectal wall.
TABLE 87-7 ■ PATIENT EDUCATIONª
Toilet Habits and Positioning
Do not delay having a bowel movement when you feel the urge.
Put aside a particular time each day (recommended advice would be after breakfast) when you can sit on the toilet without being in a hurry.
A relaxed attitude to bowel evacuation will especially help if you have problems with straining or a feeling of anal blockage.
If straining is a problem, it is helpful to have a footstool under your feet while sitting on the toilet, as this increases the ability of your abdominal muscles to help evacuation of stool.
Abdominal Massage
Lie on the bed with pillows under your head and shoulders.
Your knees should be bent up with a pillow underneath them for support.
Cover your abdomen with a light sheet.
Massage your abdomen with firm but gentle circular movements, starting at the right side and working across and down the left side. Continue to massage for approximately 10 minutes.
This massage should be a pleasant experience; if you feel any discomfort, stop.
Diet
To help prevent constipation you should eat more of the foods from List A and fewer of the foods from List B. Foods in List A tend to make the stool softer and easier to pass because they are high in fiber. Foods in List B tend to make the stool harder because they bind together the contents of the bowel.
List A: fresh fruit, prunes and other dried fruit, wholemeal bread, bran cereals and porridge, salad, cooked vegetables (with skin where possible), beans, lentils.
List B: milk, hard cheese, yogurt, white bread or crackers, refined cereals, cakes, bananas, noodles, white rice, chocolate, creamed soups.
You should increase your fiber intake gradually, because a sudden change in fiber content may cause temporary bloating and irregularity. It is important to eat fiber-containing foods all through the day and not just at one meal such as breakfast.
Increase the amount of fluid that you drink gradually, up to 8 to 10 glasses a day. Try to include water, fruit juice, and other drinks.
Sphincter Strengthening Exercises
Learning to do your exercises:
Sit comfortably with your knees slightly apart. Now imagine that you are trying to stop yourself passing wind from the bowel. To do this you must squeeze the muscle around the back passage. Try squeezing and lifting that muscle as tightly as you can. You should be able to feel the muscle move. Your buttocks, abdomen, and legs should not move at all. You should be aware of the skin around the back passage tightening and being pulled up and away from your chair. Really try to feel this. You are now exercising your anal sphincter muscles. (You do not need to hold your breath when you tighten the muscles!)
Practicing your exercises:
Tighten and pull up the anal sphincter muscles as tightly as you can. Hold them tight for at least 5 seconds, then relax for at least 10 seconds.
Repeat this at least 5 times. This will work on the strength of your muscles.
Next, pull the muscles up to approximately half of their maximum squeeze. See how long you can hold this, then relax for at least 10 seconds.
Repeat this at least 5 times. This will work on the endurance, or staying power, of your muscles.
Pull up the muscles as quickly and as tightly as you can, then relax, then pull up again, and see how many times you can do this before you get tired. Try for at least 5 quick pull-ups. Try these quick pull-ups at least 10 times a day.
Do all these exercises, as hard as you can, at least 5 times a day. As the muscles get stronger, you will find that you can do more pull-ups each time without the muscles getting tired.
It takes time for exercise to make muscles stronger, so you may need to exercise regularly for a few months before the muscles gain their full strength.
Instructions for Using Suppositories
These may be inserted into your rectum (“back passage”) by your nurse or caregiver, or by yourself if you are physically able to do it.
If necessary, go to the toilet and empty your bowels if you can.
Wash your hands.
Remove any foil or plastic wrapping from the suppository.
Either lie on your side with your lower leg straight and your upper leg bent toward your waist, or squat.
Gently but firmly insert the suppository, narrow end first, into the rectum using a finger. Push it in far enough (approximately 1 inch) so that it does not come out again.
You may find your body wanting to push out the suppository. Close your legs and keep still for a few minutes.
Try not to empty your bowel for at least 10 to 15 minutes.
ªAdapted from a patient continence advice booklet (source URL illegible in the original).
Pelvic Floor and Rectal Prolapse
Excessive perineal descent can be observed by asking the patient to “bear down” while lying in the lateral position. Normal perineal descent is less than 4 cm (it can be estimated visually by drawing an imaginary line between the ischial prominences). Rectal prolapse may also be observed in this manner, though lesser degrees of prolapse may only be identified by having the patient strain while sitting on a toilet or commode. An examination for posterior vaginal prolapse (bearing down in the gynecological position) is appropriate in all women with constipation, especially those reporting incomplete rectal emptying and the need to manually evacuate the rectum.
Plain Abdominal X-Ray
Clinical diagnosis can often be made by a thorough history and examination. However, a plain abdominal radiograph is useful in patients without rectal impaction in whom colonic loading is suspected because of a high-risk profile, constipation-related symptoms, or fecal incontinence. In those patients who continue to report troublesome constipation-related symptoms despite regular laxative use, it can guide management by showing the following:
No stool—patient may require education about what constitutes a normal bowel habit and no increase and possibly a reduction in laxative usage.
Colonic fecal loading—patient requires education on lifestyle measures and a change in type or increased dose of laxative.
Rectal loading with a clear colon—patient requires suppositories or enemas, and no increase and possibly a reduction in laxatives.
Rectal air with marked fecal loading in the descending colon may
correlate with normal-transit constipation or an evacuation disorder (Figure 87-4A). Marked fecal loading in the ascending and transverse colon correlates well with prolonged transit time, or slow-transit constipation, as does the presence of feces rather than air in the cecum (Figure 87-4B).
Dilatation of the colon (> 6.5 cm maximum diameter) in the absence of acute obstruction points toward megacolon (Figure 87-4C). Rectal dilatation (> 4 cm) implies dysmotility and evacuation problems. Finally, in patients with abdominal distension and/or pain, an abdominal radiograph is necessary to
rule out acute problems such as sigmoid volvulus and small bowel obstruction secondary to severe impaction.
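The diameter cutoffs in this passage lend themselves to a compact worked summary. The sketch below is illustrative only; the function and flag wording are my own, and it simply encodes the two rules stated in the text (colonic diameter > 6.5 cm suggesting megacolon in the absence of acute obstruction; rectal diameter > 4 cm implying dysmotility and evacuation problems).

```python
# Illustrative summary of the plain-film diameter cutoffs given in the text:
# maximum colonic diameter > 6.5 cm (absent acute obstruction) points toward
# megacolon, and rectal diameter > 4 cm implies dysmotility and evacuation
# problems. Names and flag wording are assumptions for illustration.

MEGACOLON_CM = 6.5
RECTAL_DILATATION_CM = 4.0

def radiograph_flags(colon_cm: float, rectum_cm: float) -> list:
    """Return textual flags for the diameter rules quoted in the text."""
    flags = []
    if colon_cm > MEGACOLON_CM:
        flags.append("colonic dilatation: consider megacolon")
    if rectum_cm > RECTAL_DILATATION_CM:
        flags.append("rectal dilatation: consider dysmotility")
    return flags

print(radiograph_flags(7.2, 3.0))  # ['colonic dilatation: consider megacolon']
print(radiograph_flags(5.0, 3.0))  # []
```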
FIGURE 87-4. Plain radiographs of patients with constipation. (A) Colonic or rectal fecal retention associated with air in rectum and cecum. (B) Large amount of fecal material in the entire colon with no evidence of bowel obstruction or no free intraperitoneal air. (C) Megacolon: Dilatation of the colon (> 6.5 cm maximum diameter) in the absence of acute obstruction.
Anorectal Function Tests
Anorectal function tests should be considered in the assessment of constipation in older people if a rectal evacuation disorder is suspected (see Figure 87-2). Anorectal manometry and balloon expulsion should be considered as initial assessment in patients who have not responded to fiber. They may be indicated in patients with severe and persistent rectal outlet delay, in order to diagnose pelvic dyssynergia, which is more effectively treated by biofeedback than laxatives. Another indication is fecal incontinence of formed stool that persists despite clearing of fecal impaction. Balloon expulsion is a simple procedure that evaluates the ability to defecate a water-filled balloon. During the test, the time required to expel the balloon is recorded, with a normal range of 1 to 5 minutes. If spontaneous evacuation is not possible, an alternative outcome measure is the additional external weight needed to assist expulsion of the balloon.
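The timing rule just described can be shown as a minimal sketch. The function name, the use of `None` to represent a balloon that was not expelled, and the advice strings are assumptions for illustration; the only grounded element is the normal range of 1 to 5 minutes stated in the text.

```python
# Illustrative classification of a balloon expulsion test result, based on
# the normal range stated in the text (expulsion of the water-filled balloon
# within 1 to 5 minutes). The function name, the None convention for a
# non-expelled balloon, and the strings are assumptions, not clinical advice.
from typing import Optional

NORMAL_MAX_SECONDS = 5 * 60  # upper limit of the stated normal range

def classify_balloon_expulsion(seconds: Optional[float]) -> str:
    """seconds: time taken to expel the balloon; None if not expelled."""
    if seconds is None:
        return "not expelled: evaluate for a defecatory disorder"
    if seconds <= NORMAL_MAX_SECONDS:
        return "normal"
    return "prolonged: evaluate for a defecatory disorder"

print(classify_balloon_expulsion(90))   # normal
print(classify_balloon_expulsion(400))  # prolonged: evaluate for a defecatory disorder
```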
Anorectal tests (including endoanal ultrasound) can measure the integrity of the anal sphincters and thus guide management of incontinence toward conservative treatment (sphincter strengthening exercises and biofeedback therapy) or surgical intervention (sphincter reconstruction).
Defecography
There are several modalities to assess defecatory movements and anatomy. Traditionally, barium and scintigraphic defecography have been used; more recently, magnetic resonance defecography was developed to identify anatomic abnormalities with higher resolution and better visualization, and without radiation. Defecography should be used when anorectal manometry and balloon expulsion are inconclusive or when there is a high suspicion for an anatomic disorder. The most relevant findings in defecography are inadequate or excessive perineal descent during defecation, excessive straining, internal intussusception, solitary rectal ulcers, rectoceles, and rectal prolapse. Unfortunately, these tests are not widely available.
Colonic Transit
Colonic transit is the rate at which fecal residue moves through the colon. There are several approved methods to measure colonic transit. The most common and inexpensive measurement uses radiopaque markers (Sitzmarks) in the Hinton technique: a capsule containing 24 radiopaque markers is swallowed, and on an abdominal radiograph at 5 days, five or fewer markers should remain in the colon. Radionuclide gamma scintigraphy and the wireless pH-pressure capsule are other well-validated but less commonly used methods. The benefit of scintigraphy is that results can be obtained within 48 hours, compared with 5 days for radiopaque markers. Colonic transit studies are useful to differentiate patients with slow-transit constipation from those with normal-transit constipation (Figure 87-5).
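The marker-count rule of the Hinton technique reduces to simple arithmetic, sketched below. Function and constant names are my own, and the cutoff encoded is exactly the one given in the text (24 markers swallowed; five or fewer retained on the day-5 film is normal); this is an illustration, not a reporting tool.

```python
# Illustrative sketch of the Hinton radiopaque-marker protocol described in
# the text: a capsule with 24 markers is swallowed, and on a day-5 abdominal
# radiograph five or fewer retained markers is considered normal. Function
# and constant names are illustrative assumptions.

MARKERS_SWALLOWED = 24
NORMAL_RETAINED_MAX = 5  # five or fewer markers at day 5 is normal

def hinton_interpretation(markers_on_day5_film: int) -> str:
    """Classify a day-5 retained-marker count."""
    if not 0 <= markers_on_day5_film <= MARKERS_SWALLOWED:
        raise ValueError("marker count must be between 0 and 24")
    if markers_on_day5_film <= NORMAL_RETAINED_MAX:
        return "normal transit"
    return "delayed transit"

print(hinton_interpretation(3))   # normal transit
print(hinton_interpretation(14))  # delayed transit
```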
FIGURE 87-5. Examples of scintiscans at 6, 24, and 48 hours in patients with evacuation disorder and slow-transit constipation (STC); note the delayed transit is also demonstrated at 48 hours in the patients with STC and the retention of isotope in the left colon in patients with evacuation disorder. (Reproduced with permission from Nullens S, Nelsen T, Camilleri M, et al. Regional colon transit in patients with dys-synergic defaecation or slow transit in patients with constipation. Gut. 2012;61[8]:1132–1139.)
Colonoscopy
Chronic constipation alone is not an appropriate indication for colonoscopy; the range of neoplasia found is similar to that in asymptomatic patients undergoing primary colorectal cancer screening. A meta-analysis showed
that the presence of constipation as the primary indication for colonoscopy was associated with a significantly lower prevalence of colorectal cancer. Further investigation is warranted in the context of systemic illness or laboratory abnormalities. Barium enema is no longer used as the first line of testing; colonoscopy causes significantly less discomfort than a barium enema and is diagnostically more sensitive. A review of 400 colonoscopies in patients aged 80 and older showed a good safety profile but a low cancer detection rate for symptoms (eg, constipation, abdominal pain) other than bleeding (2% vs 12%). Inadequate colonoscopies are common in older people because of poor bowel preparation. Older age, constipation, reported laxative use, tricyclic antidepressants, stroke, and dementia have been associated with inadequate preparation and thus longer times to reach the cecum. Clear instructions (such as booklets, visual aids, cell phone apps), split-dose preparation instructions added by the pharmacy to the product, pre-procedure phone calls, and availability of at least two alternative bowel preparation options are methods used to reduce poor bowel preparation and improve compliance in older adults. There are no specific bowel preparation regimens for older patients recommended by current guidelines. The most common ones in the general population include polyethylene glycol electrolyte lavage solution and sodium phosphate-type laxatives. Magnesium citrate should be used cautiously in older adults given age-related electrolyte derangement (increased levels of sodium, potassium, and magnesium). Older patients with cardiovascular disease may have an increased risk of ischemic colitis when taking bisacodyl. Lastly, older patients with reduced renal function should avoid sodium phosphate, as it has been associated with tubular toxicity from calcium phosphate.
NONPHARMACOLOGIC MANAGEMENT
Nonpharmacologic treatments for constipation are underused as first-line management in constipation that is not acute or severe, and as adjunctive treatment even when laxatives are deemed necessary. Symptoms of difficult evacuation may be particularly amenable to nonpharmacologic management such as stool softening and bulking through increasing fiber and fluid intake, pelvic muscle strengthening exercises, and footstool elevation of the legs during evacuation (see below). A systematic review examining nonpharmacologic treatment of chronic constipation in older people found no studies evaluating the effect of exercise therapy and only a few
nonrandomized trials examining fiber and fluid supplementation, and there has been little further research in this area since then. However, a study of older patients from two nursing homes in Ismailia, Egypt, reported that lifestyle modification (including fluid intake: 87% drank more than 1.5 L daily after the intervention, compared with 39% before) improved symptoms and quality of life. Available data, expert opinion, and practical recommendations are summarized below.
Education
Educating patients as to what constitutes normal bowel habit should be one of the first steps in managing self-reported constipation. Patients with no or mild symptoms of constipation should be encouraged to discontinue chronic laxative therapy. Patients who require laxative treatment for constipation should be told to aim for regular, comfortable evacuation rather than daily evacuation, which is often their preconceived norm. Educational interventions promoting lifestyle changes for patients with chronic constipation should focus on exercise and diet. In order to persuade older people with constipation to change their lifestyle, they need to be convinced that:
Their current behaviors are “bad for their bowels.”
Bowel-related and general health improvements associated with recommended measures are worth the trouble and expense of changing.
It is they who are responsible for what they eat and how much exercise they engage in.
They have the skills and knowledge to modify their own lifestyle to improve their constipation, if they choose to do so.
It is important to provide people with clearly written educational
materials. A randomized controlled trial in stroke survivors with constipation evaluated the impact of a one-off nurse-led assessment (with feedback to the primary care physician) and an educational session including provision of a booklet (Bowels and bladder | Heart and Stroke Foundation, from https://www.heartandstroke.ca/stroke/recovery-and-support/physical-changes/bowels-and-bladder). At 6 months after intervention, subjects reported improved bowel function in terms of number and normality of bowel movements; at 1 year, they were more likely to be altering their diet
and fluid intake to control their bowel problem. Table 87-7 illustrates some of the patient-centered instructions from this study booklet, which are relevant to all older people with constipation.
Other trials have sought to influence fiber intake at a population level. Nutrition newsletters sent to older Americans in their homes significantly improved their dietary fiber intake. Another community intervention used media and social marketing to target small retirement communities under the theme “Bread: It’s a Great Way to Go,” and reported a 49% decrease in laxative sales and a 58% increase in sales of wholemeal and whole grain bread.
Educating caregivers on maintaining fecal continence in patients with dementia (with a focus on constipation and other contributing factors) is crucial, and the same applies for patients with chronic neurologic diseases other than dementia, such as Parkinson disease, stroke, and neuropathic bowel (Constipation - Medical and physical issues - Caring for someone with dementia - Living with dementia - Alzheimer Europe, from https://www.alzheimer-europe.org/Living-with-dementia/Caring-for-someone-with-dementia/Medical-and-physical-issues/Constipation#fragment1; Constipation and Faecal Impaction - Information Sheet IS41, from https://www.alzscot.org/sites/default/files/images/0000/0175/constipation-and-faecal-impaction.pdf).
DIET
Fiber increases stool frequency either by accelerating colonic transit time or by increasing stool bulk. A systematic review of six randomized controlled trials (four with soluble fiber, two with insoluble fiber) showed that soluble fiber led to improvements in global symptoms, straining, pain on defecation, and stool consistency, as well as an increase in stool frequency per week. Mixed fiber and psyllium were compared in a randomized controlled trial, with both equally improving constipation and quality of life; mixed fiber was more effective in relieving flatulence and bloating, and it dissolved better. A meta-analysis of 20 nonrandomized studies in younger adults with constipation associated additional wheat bran with increased stool weight and decreased transit time. Evidence for the effectiveness of fiber in treatment of constipation in older people is more equivocal. In one community study,
higher fiber intake was associated with lower laxative use among older women, but in another study, higher intake of bran was associated with no reduction in constipation symptoms and greater fecal loading in the colon on abdominal radiography. In older hospitalized patients, daily bran supplementation increased weekly bowel movement frequency and improved overall symptoms as compared with placebo. There have been several “before and after” studies in nursing home residents reporting that addition of dietary fiber (ranging from bran to processed pea hull) or fruit mixtures (apple puree to fruit porridge) to the daily diet improved bowel movement frequency and consistency and reduced laxative intake and the need for nursing intervention. Bias cannot be excluded from these nursing home studies, including concomitant increases in fluid intake contributing to the positive results. Despite these reservations, these observational studies emphasize the usefulness of increasing dietary fiber, fluid, and fruit in older people at high risk of constipation. Additional benefits may be observed; for instance, adding oat bran to the diet in one study reduced cholesterol levels more markedly in older versus younger women.
The recommended fiber intake is 20 to 35 g per day, with patients starting at a low dose of 3 to 4 g daily and increasing gradually as tolerated. Coarse bran is more effective than refined fiber in increasing stool fluid weight, but it is far less palatable and more likely to cause initial symptoms of increased bloating, flatulence, and irregular bowel movements. Fiber should therefore be recommended to older individuals in the form of foods such as whole meal or whole grain bread, porridge, fresh fruit (preferably unpeeled), seeded berries, raw or cooked vegetables, beans, and lentils. A crossover trial in subjects aged 60 and older in which participants ate a kiwi fruit daily resulted in bulkier and softer stools and increased bowel movement frequency. Fiber supplementation should also be culturally appropriate. Chinese food is typically low in fiber, and dietary additives such as konjac glucomannan can serve as “natural laxatives.” Other examples of natural laxatives are aloe vera and rhubarb, both of which contain stimulant anthraquinone derivatives similar to senna.
Fluids
A randomized controlled trial in adults aged 18 to 50 with chronic constipation showed that the beneficial effect of increased dietary fiber was significantly enhanced by increasing fluid intake to 1.5 to 2 L daily. This
level of fluid intake is often not practical or acceptable to frail older patients, many of whom restrict fluid intake because of overactive bladder symptoms. Increasing fluid intake by two 8-ounce beverages a day for 5 weeks in dependent nursing home residents significantly increased bowel movement frequency and reduced laxative use. This “hydration program” used a colorful beverage cart and four beverage choices to stimulate residents’ interest in drinking. Caffeine is known to increase both bowel and bladder smooth muscle activity, though its impact on constipation in older people is not documented.
Physical Activity
A randomized controlled trial in middle-aged inactive patients with chronic constipation showed that regular physical activity (a 30-minute brisk walk and 11 minutes of home exercises a day) decreased colonic and rectosigmoid transit time and improved defecation. A systematic review and meta-analysis of eight studies involving aerobic exercise and one study involving anaerobic exercise indicated that exercise may be an effective treatment for constipation. A review of physical activity interventions in older adults concluded that incorporating exercise naturally into a person’s day tends to provide the most effective means for increasing activity levels. Studies conducted in older nursing home patients showed the following outcomes:
Six months of moderate-intensity exercise training had no impact on constipation symptoms or habitual physical activity (randomized controlled trial [RCT]).
Six months of 2-hourly prompted toileting improved measures of daily physical activity and functional performance but did not alter bowel movement frequency (RCT).
Daily exercise in bed and the use of abdominal massage reduced laxative and enema use in chair-fast patients, although transit time was unaffected (non-RCT).
Existing evidence tends to support exercise programs to reduce constipation in nursing home residents and older community-dwelling people, within the context of addressing other risk factors as well. For some frail older people, exercise is difficult, but even short periods of walking or other mobility exercises may be feasible and helpful.
Abdominal Massage
Abdominal massage added to the standard bowel regimen in patients with spinal cord injury has been shown to shorten colonic transit time and increase weekly bowel movement frequency. A vibrating device that applied kneading force to the abdomen once a day for 20 minutes was evaluated in older constipated nursing home residents, and after 12 weeks resulted in softening of stool, increased bowel movement frequency, and a 47% reduction in transit time.
Case reports show that physiotherapists can incorporate daily 10-minute abdominal massage into home activity programs for community-dwelling people suffering from constipation with good effect.
Biofeedback: Pelvic Floor Rehabilitation and Sphincter Strengthening Exercises
Biofeedback focuses on sensory and muscular retraining of the rectum and pelvic floor, with the goals of improving sensation, muscular relaxation or strengthening, and defecation dynamics (see Figure 87-2). Biofeedback is not well standardized, and the best approach is unclear. A Cochrane review of 17 eligible studies with a total of 931 participants concluded that, because of the heterogeneity of the samples and the large range of different outcome measures, meta-analysis was not possible.
However, the larger studies are summarized here:
70% (21/30) of biofeedback patients had improved constipation compared to 23% (7/30) of diazepam patients at 3-month follow-up (relative risk [RR] 3.00, 95% CI 1.51–5.98).
In a 52-patient study, patients receiving manometry biofeedback had 4.6 complete spontaneous bowel movements (CSBM) per week at 3 months, compared to 2.8 CSBM for sham biofeedback or standard therapy consisting of diet, exercise, and laxatives (mean difference 1.80, 95% CI 1.25–2.35).
In a study of EMG biofeedback reported at both 6 and 12 months, 80% (43/54) of biofeedback patients reported clinical improvement compared to 22% (12/55) of laxative-treated patients (RR 3.65, 95% CI 2.17–6.13).
The overall quality of evidence from these studies is low because the majority of trials are of poor methodologic quality and subject to bias.
A randomized controlled trial showed that home-based biofeedback improved the number of CSBM per week and quality of life with similar efficacy to office-based biofeedback and was a cost-effective treatment.
Where rectal outlet delay and/or persistent straining is associated with excessive pelvic floor descent, pelvic strengthening exercises should be employed. In women, it is helpful to do the teaching while undertaking a pelvic examination (with the examining hand resting on the posterior vaginal wall), so that positive verbal feedback can be given when the patient correctly contracts the pelvic floor. Pelvic floor retraining can help rectal outlet symptoms, but a greater degree of perineal descent is predictive of poorer treatment responses. Patients with fecal soiling and/or weak external sphincter should be taught sphincter-strengthening exercises (see Table 87-7).
Biofeedback and pelvic strengthening exercises are not a feasible option for older patients with significant cognitive impairment, as executive dysfunction can interfere with cooperating and learning the exercises, and memory impairment can interfere with practicing and using the exercises.
Visceral Manipulation
Visceral manipulation can be used in patients who have failed biofeedback. This approach, which is commonly used by osteopaths, consists of mobilization of the bowels through gentle manipulation to normalize their mechanical, vascular, and neurologic dysfunction, with subsequent improvement in function. The benefit of this therapy in constipation may be related to the loss of resilience in structures surrounding the peritoneal bowels. A randomized, controlled, double-blind clinical trial including 30 patients with a mean age of 66 years and a recent history of stroke compared visceral manipulation versus standard physical therapy. Significant improvements in frequency of bowel movements, difficulty defecating, and sensation of incomplete bowel movement were seen in the visceral manipulation group.
Toileting Habits and Access
Small nonrandomized studies show that regular toileting habits (scheduled evacuation) restore comfortable evacuation in stroke survivors (with the assistance of digital stimulation) and in older postoperative inpatients. The preservation of the gastrocolic reflex with aging supports the rationale for
postprandial toileting. Hemorrhoids and other uncomfortable anorectal conditions that interfere with toileting should be identified and treated. Expert opinion supports the use of footstools during evacuation in individuals with weakened abdominal and pelvic muscles to optimize the Valsalva maneuver.
Toilet access should be assessed and facilitated, particularly in patients with mobility, visual, or dexterity impairments. Bathroom comfort and privacy must be considered, particularly for individuals in institutional settings. Reluctance to use the toilet in institutional settings has been linked to residents developing fecal impaction in case reports. Table 87-8 summarizes basic but important recommendations relating to toileting and maintaining privacy and dignity.
TABLE 87-8 ■ TOILETS AND TOILETING—MAINTAINING PRIVACY AND DIGNITY
PHARMACOLOGIC TREATMENT
Laxative and Enema Use and Abuse in Older People
A Food and Drug Administration (FDA) Advisory Panel has registered concern over the widespread overuse of over-the-counter (OTC) laxatives; laxatives are second only to analgesics as the most commonly used OTC medications by older people. OTC laxative use is common in the United States and Europe, and is encouraged by advertising and popular ignorance of adverse effects. Only 38% of OTC laxative users in Italy were guided in their choice of laxative by a physician. The remainder were influenced by pharmacists (21%), relatives or friends (16%), and advertisements (12%). Six percent of users reported adverse effects. One-fifth to one-third of regular laxative users do not consider themselves to be constipated, and many people take them through a misguided belief in the benefits of regular purgation. One study showed that 78% of older people who used laxatives regularly had never gone for more than 3 days without a bowel movement. Habitual rather than surreptitious abuse is more likely in older individuals; repeated purging empties the colon of stool that would normally descend into and distend the rectal ampulla, thereby removing the urge to defecate, and prompting the patient to take further laxatives.
Although patients in hospitals and nursing homes are at higher risk for constipation, this does not entirely justify the very high levels of cathartic prescribing in these settings. Seventy-six percent of hospitalized older patients are prescribed at least one type of laxative. A prospective study of 2355 nursing home residents in the Netherlands showed that over the course of 2 years, 47% were started on laxatives, with 79% of these continuing with the long-term treatment. Prescribing rates in US nursing homes are high at 54% to 74%, with almost half of these users prescribed more than one agent. Most commonly prescribed agents are stool softeners (26%), magnesium salts (18%), and stimulants (16%). Two contributing factors may lead to overprescribing of laxatives to older patients: lack of objective confirmation of the diagnosis by the prescribing physician or nurse, and prescribing patterns of laxatives that are clinically ineffective. In US nursing homes, docusate (a fecal softener with little or no laxative effect) is the predominantly prescribed agent. Docusate prescription should be discouraged, and other more effective methods of softening the stool should be prescribed.
Evidence-Based Summary of Laxative, Suppository, and Enema Treatment in Older Persons
Many reported trials of laxative and enema treatment in older people are of low quality, limited by unclear definitions of constipation, inconsistent outcome measurement, and underreporting of potential confounding factors during the trial period (eg, fiber intake). The absence of good-level evidence may in part underlie the somewhat empirical way in which laxatives are prescribed to older people. The following conclusions are drawn from meta-analytical reviews (1997, 2001, 2002, 2004, 2010, 2011, 2018) of the efficacy of laxatives in treating chronic constipation in adults:
Availability of published evidence is poor for many commonly used agents including senna, magnesium hydroxide, bisacodyl, and stool softeners.
In trials conducted in older people, significant improvements in bowel movement frequency were observed with a stimulant laxative (cascara)
[3] and with lactulose [2], while psyllium [2] and lactulose [2] were individually reported to improve stool consistency and related symptoms in placebo-controlled trials.
Level [1] evidence supports the use of polyethylene glycol (PEG) in adults.
Level [2] evidence supports the use of lactulose and psyllium in adults.
None of the currently available trials include quality of life outcomes.
A systematic review of older adults (68–85 years) at long-term care facilities concluded there were insufficient data to evaluate the safety and efficacy of laxatives. Senna in combination with bulking agents had greater efficacy than lactulose based on two trials. One trial showed that senna with psyllium was more effective in adult ambulatory patients than psyllium alone.
A stepped approach to laxative treatment in older people is justified, starting with cheaper laxatives before proceeding to more expensive alternatives.
Tables 87-9 and 87-10 summarize the onset of action, mechanisms of
action, potential side effects, and benefits of selected laxatives in current usage, and the following discussion describes efficacy and safety.
TABLE 87-9 ■ MEDICATIONS USED IN OLDER PEOPLE WITH CONSTIPATION
TABLE 87-10 ■ NEWER MEDICATIONS FOR TREATMENT OF CONSTIPATION
Stimulant Laxatives
Senna is a cheap and safe agent for use in older patients. A trial of cascara (a similar plant-derived stimulant laxative) in older hospitalized patients increased bowel movement frequency by an average of 2.6 bowel movements per week as compared to placebo. Administration of 20 mg of senna daily for 6 months to patients older than 80 years did not cause any significant losses of intestinal protein or electrolytes, and repeated studies in mice show no evidence of myenteric nerve damage resulting from its use.
Senna generally induces evacuation 8 to 12 hours after administration and should therefore be taken at bedtime. Some patients may require several weeks of daily use before achieving a regular bowel habit. Maintenance therapy with senna is appropriate in patients with chronic constipation, and it can be used in higher doses for short-term treatment of fecal impaction. In patients with weak anal sphincters, senna alone may be sufficient to treat constipation without causing or exacerbating fecal incontinence through excess stool softening.
Bisacodyl is a useful alternative stimulant laxative to senna. Bisacodyl 10 mg daily improved stool frequency and consistency without side effects in a randomized, controlled trial among outpatients with a mean age of 62.
Phenolphthalein and castor oil should not be used in older people because of a high risk of side effects including malabsorption, dehydration, lipoid pneumonia, and, with heavy prolonged use, cathartic colon.
Bulk Laxatives
Bulk laxatives are generally under-prescribed to older people, despite evidence that they increase bowel movement frequency (by a mean of 1.4 bowel movements per week as compared to placebo), and improve consistency and ease of evacuation. This may partly be because of intolerance in the form of bloating and unpredictable bowel habit in the first weeks of taking them, and also of caution on the part of prescribers because of the documented risk of impaction with these agents in patients with poor fluid intake. Psyllium has been shown to increase stool frequency in people with Parkinson disease, but did not alter transit time.
Bulking agents are generally useful in older individuals with mild to moderate constipation who are able to tolerate them and who drink sufficient fluids. They have the additional benefit of reducing abdominal pain in patients with IBS, limiting flare-ups of diverticulitis, and facilitating painful defecation associated with hemorrhoids. Furthermore, psyllium significantly lowers serum cholesterol by binding bile acids in the intestine. Available preparations are natural non-wheat fibers such as psyllium and ispaghula husk, and synthetic compounds such as calcium polycarbophil and methylcellulose. The synthetic compounds tend to be cheaper and are available in more easily administered tablet forms, as compared to reconstituted powder, which can be hard for older patients to swallow. The synthetic bulking agents and the natural fibers are equally effective in increasing stool frequency and volume. Bran in tablet form is considerably cheaper than bulk laxatives, but it may cause even more bloating and unpredictability of bowel habit and may also predispose to malabsorption of iron or calcium in older people.
Magnesium Salts
Magnesium salts are commonly prescribed to older hospitalized patients, and magnesium hydroxide is popular as an over-the-counter laxative. There is only one published study evaluating magnesium hydroxide in older people—a small trial in nursing home residents, which suggested that this laxative was more effective than a bulking agent in increasing bowel movement frequency
and softening stool. Magnesium salts may be favored by physician and patient alike because of their rapid action, but in general, a gradual catharsis is preferable to restore regular bowel habit in older persons. Their potent catharsis increases the risk of fluid and electrolyte losses and of fecal incontinence in less-mobile people, or in those with weak sphincters.
Furthermore, magnesium levels should be monitored in all older people who are using magnesium hydroxide on a regular basis, as hypermagnesemia can occur even with normal serum creatinine levels. Long-term use of magnesium hydroxide is contraindicated in chronic kidney disease. The more potent salt magnesium citrate carries an even greater risk of side effects, including promoting colonic pseudo-obstruction, and is therefore not recommended for use in the older population. Based on evidence and known side effects, there is no clear role for using magnesium salts in treatment of chronic constipation in older people.
Hyperosmolar Laxatives
Hyperosmolar laxatives are the most rigorously studied laxative group in the current literature. The following summarizes findings from nursing home studies:
Lactulose (or the related agent lactitol) versus placebo shortened transit time, increased bowel movement frequency (by an average of 1.9 bowel movements per week), and improved stool consistency.
In a comparison study with a bulking/stimulant combination agent, lactulose was a little less effective in influencing bowel pattern and consistency.
Lactulose and sorbitol were equally effective in mostly eliminating the use of other laxatives and enemas in residents with dementia and chronic constipation, with sorbitol being considerably cheaper.
A well-designed trial in ambulatory older veterans with severe
constipation also showed lactulose and sorbitol to be equally efficacious. Lactulose and sorbitol are effective agents in treating chronic constipation in older people in all health care settings, with sorbitol being the cheaper option.
Polyethylene glycol (PEG) is a more potent hyperosmolar laxative than lactulose as demonstrated by its impact on transit time in normal subjects. In
a randomized controlled trial in hospitalized patients with a mean age of 55, PEG produced a greater increase in bowel movement frequency and a greater reduction in straining than lactulose, but at the expense of a higher mean number of liquid stools. A similar efficacy and side-effect profile plus a reduction in laxative expenditures were shown with use in long-stay residents of a mental health institution. PEG treatment of fecal impaction (in combination with daily enemas) showed greater efficacy than lactulose, without the dehydration or hemodynamic side effects in older nursing home residents. Another trial in adults aged 17 to 88 with fecal loading on x-ray or rectal examination, and bowels not open for 3 to 5 days showed that 1 L (or 8 sachets) a day of PEG plus electrolytes for 3 days resolved impaction in 89% of patients, with few adverse effects. The current evidence base suggests that the role of PEG in older people is for acute disimpaction (ensuring that easy toilet access is guaranteed) and for regular use as a laxative only in high-risk people whose constipation has proved resistant to milder and cheaper alternatives.
Stool Softeners
Docusate sodium has been shown experimentally to have no effect on colonic motility, and little or no laxative action, even at doses of 300 mg/day. Current evidence suggests that docusate is not effective in the treatment of constipation or rectal outlet delay in older people. In a randomized controlled trial in adults with severe constipation, docusate proved significantly inferior to psyllium for both softening stools and increasing bowel movement frequency. A systematic review of prospective controlled trials evaluating oral docusate in chronically ill people (though hampered by poor data) showed only a small trend toward increased stool frequency, and concluded that there was insufficient evidence to support its use in this population.
Nevertheless, docusate is frequently recommended and used in older people as a laxative as well as fecal softener. This is of particular concern in the nursing home and hospital settings, where constipation may as a result be undertreated with an increased risk of fecal impaction. Furthermore, docusate (in combination with the stimulant danthron) increases the risk of fecal incontinence in nursing home residents. As stated earlier, docusate prescription should be discouraged, and other more effective methods of softening the stool should be prescribed.
Enemas
Enemas have a role both in acute disimpaction and in preventing recurrent impactions in susceptible patients. They induce evacuation as a response to colonic distension, as well as by plain lavage; the commonest reason for a poor result from an enema is inadequate administration. In one study of nursing home residents with overflow incontinence associated with fecal impaction on rectal examination, daily phosphate enemas, continued until they produced no further result, completely resolved incontinence in 94% of patients. Some patients with poor mobility or neurogenic bowel dysfunction may have recurrent stool impactions despite regular laxative and suppository use, and they will benefit from weekly enemas.
Tap water enemas are the safest type for regular use, although they take more nursing administration time than phosphate enemas, and are not available in certain countries. Regular use of phosphate enemas should be avoided in patients with renal impairment as dangerous hyperphosphatemia has been reported. Soapsuds enemas should never be administered to older patients. Arachis oil retention enemas are particularly useful in loosening colonic impactions. In patients who have a firm and large rectal impaction, manual evacuation should be performed before inserting enemas or suppositories, using local anesthetic gel if needed to reduce discomfort.
Suppositories
The predominance of rectal outlet delay (including the need for manual evacuation) in older people, many of whom take regular laxatives, suggests that suppositories are underused. Although research data are lacking, suppositories are clinically very useful in the treatment of rectal outlet delay and when symptoms of prolonged straining are prominent. Regular suppository administration (usually three times a week, and ideally after breakfast) can effectively control symptoms in these patients. A study of nursing home patients with overflow incontinence found that a regimen of daily lactulose and suppositories plus weekly enemas was only effective in restoring continence when long-lasting and complete rectal emptying was achieved.
Regular use of suppositories is indicated in older patients with impaired rectal motility and/or recurrent rectal impactions. With appropriate education, many older people can self-administer suppositories; they are easier to insert and more effective if used blunt end first, and people with impaired dexterity may be able to use suppository inserters designed for
spinal cord injured patients. First-line suppository use is with glycerin, a hyperosmolar laxative used solely in suppository form. If ineffective, bisacodyl suppositories (in PEG base) should be used, although daily use can sometimes cause symptoms of rectal discomfort or burning. Bisacodyl
suppositories have been shown to be effective in treating severe constipation in patients with spinal cord injuries. The onset of action of suppositories varies by individual from 5 to 45 minutes (most likely influenced by the state of the rectal innervation); thus patients should be advised to set a quiet time aside for effective evacuation. Postprandial use of suppositories can also potentially take advantage of the gastrocolic reflex.
Secretagogues
Lubiprostone is approved in the United States for the treatment of chronic constipation. It is classified as a prostone, a bicyclic fatty acid compound derived from a metabolite of prostaglandin E1, which acts locally in the small intestinal mucosa, inducing secretion of fluid and electrolytes through activation of the type-2 chloride channels in the intestinal apical cell membrane. A secondary analysis of one 4-week trial among 57 patients aged 65 and older showed that lubiprostone 24 μg twice daily significantly improved the number of spontaneous bowel movements, consistency, and straining rate compared to placebo, and was well tolerated with fewer side effects than placebo. In another secondary analysis of 163 older adults, lubiprostone showed significant improvement in constipation severity, abdominal bloating, and discomfort compared to placebo.
Linaclotide activates guanylate cyclase C on the intestinal mucosa, resulting in increased levels of intracellular and extracellular cyclic guanosine monophosphate, which results in increased luminal secretion of chloride and bicarbonate via the cystic fibrosis transmembrane conductance regulator. A meta-analysis of seven trials in patients with IBS or chronic constipation showed that linaclotide increases the number of complete spontaneous bowel movements per week and was associated with a 30% or more reduction from baseline in the weekly average of daily worst abdominal pain scores for 50% of the treatment weeks. Linaclotide also improved stool form and reduced abdominal pain, bloating, and overall symptom severity. Trials are lacking in the older population, and this drug, like lubiprostone, is expensive and not covered as a first-line agent by most insurance plans in the United States.
Plecanatide is another guanylate cyclase C agonist, similar to linaclotide. Multiple placebo-controlled trials showed the efficacy of plecanatide 3 and 6 mg when compared to placebo; plecanatide improved bowel symptoms (stool frequency, stool consistency, cramping, discomfort, fullness) and had a higher percentage of responders. Mean weekly CSBM frequency also increased from baseline. Most recently, an analysis including data from phase III trials in chronic idiopathic constipation and IBS-C with patients older than 65 years (451 patients, of whom 287 were randomized to plecanatide) showed that plecanatide improved stool consistency from baseline at week 12 and was well tolerated.
Enterokinetic Agents
Altered serotonin (5-HT) signaling may predispose to chronic constipation, and 5-HT4 agonists (eg, prucalopride) have been shown to stimulate gastrointestinal motility and increase stool water content. The efficacy and safety of 5-HT4 agonists in treating chronic constipation were evaluated in a recent meta-analysis of 13 randomized controlled trials where 5-HT4 agonists were superior for all outcomes: mean ≥ 3 SCBM/week (RR = 1.85; 95% CI 1.23–2.79); mean ≥ 1 SCBM over baseline (RR = 1.57; 95% CI 1.19–2.06). Two of these studies were done in older populations and
prucalopride was superior to placebo. Although previous 5-HT4 agonists (cisapride and tegaserod) were associated with cardiac ischemia, strokes, cardiac arrhythmias, and QTc prolongation, prucalopride is not associated with major adverse cardiovascular events and has been approved in the United States. Tenapanor is a selective sodium/hydrogen exchanger isoform 3 (NHE3) inhibitor. It induces increased water secretion by decreasing sodium absorption in the intestines. A phase III, placebo-controlled clinical study in 629 patients with IBS-C reported an increase from baseline in the average weekly number of CSBM in the tenapanor group compared to placebo. Unfortunately, the mean age was 45 years and data for outcomes in the older population are unknown.
Treatment Options for Opioid-Induced Constipation
Constipation is the most common gastrointestinal effect of opioids, with development in 41% to 94% of users. Activation of enteric μ-opioid receptors induces a decrease in bowel tone and contractility and an increase in colonic fluid absorption and anal sphincter tone leading to increased
difficulty passing stool. No treatment guidelines in the older population are available, but it has been suggested to use nonpharmacologic interventions and laxatives as first-line therapy. In cases of refractory constipation, peripherally acting μ-opioid receptor antagonists (PAMORA), such as naldemedine, naloxegol, alvimopan, and methylnaltrexone, are strongly recommended.
Four RCTs demonstrated that naldemedine increased the rate of patients having greater than 3 CSBM per week when compared to placebo (52% vs 35%; relative risk, 1.51; 95% CI, 1.32–1.72), along with statistically significant improvements in straining, stool consistency, and quality of life. Naloxegol was associated with a higher rate of response to therapy, measured by greater than or equal to 3 CSBM per week (42% vs 29%; relative risk, 1.43; 95% CI, 1.19–1.71). Regarding methylnaltrexone, the evidence of five RCTs demonstrated an improvement in bowel movement frequency compared with placebo but was considered low quality.
Treatment Option for Refractory Constipation
Transanal irrigation (TAI) consists of the instillation of water into the rectum using a rectal catheter or a cone that can be self-administered. A systematic review and meta-analysis identified seven studies (two uncontrolled, five retrospective) with a total of 254 patients with chronic constipation. A total of 128 patients reported a positive response to irrigation therapy, either subjectively or using a visual-analogue score. A fixed effect analysis of proportions gave a pooled response rate of 50.4% (95% CI: 44.3%–56.5%). Retrospective evaluation of 102 patients with refractory chronic idiopathic constipation treated with TAI reported improvement in general well-being (65%), rectal clearance (63%), bloating (49%), abdominal pain (48%), bowel frequency (42%), and SCBMs (22%). Unfortunately, these two studies included mostly patients younger than 65 years; therefore, benefits in the older population are unknown.
Surgery is reserved for a small proportion of patients with medically refractory chronic constipation. Surgical interventions include colonic resection, rectal suspension, rectal wall excision, and rectovaginal reinforcement. Patients with slow-transit constipation reported a global satisfaction rate of 86% after colectomy. Surgical intervention in older patients should be considered carefully given the lack of studies assessing benefits and complication rates.
Novel Therapies
Bilateral transcutaneous tibial nerve stimulation (TTNS) was studied for 12 weeks in 44 patients aged 65 or older with chronic refractory constipation. Inadequate defecation, obstructive defecation, colonic inertia, and pain improved at 6 weeks, with pain scores continuing to decrease at 12 weeks.
Other therapies, such as sacral nerve stimulation and vibrating capsule, have not been tested in the older population.
TREATMENT GUIDANCE
Table 87-11 combines evidence-based recommendations and expert opinion into treatment guidance, which can be summarized as follows:
TABLE 87-11 ■ PHARMACOLOGIC TREATMENT OF CONSTIPATION IN OLDER PEOPLE—A STEPWISE APPROACH
In ambulatory older patients with chronic constipation, a daily bulk laxative is appropriate for both rectal outlet delay and slow-transit constipation. Patients must be able to drink adequate amounts of fluid to avoid further constipation and fecal impaction.
If the bulking agent is not tolerated, or proves ineffective, then senna may be substituted (1–3 tablets at night) with prn sorbitol (or lactulose) if needed to achieve patient-centered goals of comfortable regular evacuations.
In less-mobile older people at higher risk of impaction, a combination of regular senna and sorbitol (or lactulose) should be used with dosage titration.
In patients with colonic impaction, oil retention enemas should be administered daily until there are no clinical or radiologic signs of obstruction, and then tap water enemas continued regularly until they produce no further result.
Where the patient has easy access to a toilet, PEG 0.5 to 2 L daily should be given as long as is needed to clear the impaction, followed by a regular maintenance laxative regimen of senna and sorbitol (or lactulose) to avoid recurrence of fecal impaction.
In cases where toilet access is not easy (eg, at home with stairs), a more gradual clear-out using higher-dose senna and sorbitol or lactulose is appropriate to limit problems with incontinence.
For rectal outlet delay or a predominant complaint of straining, the first-line approach should be regular use of suppositories; laxatives should be given only for coexisting symptoms of hard or infrequent bowel movements.
Interventions for constipation such as senna, osmotic laxatives, and suppositories should be supplemented by regular toileting after breakfast, since food and caffeine intake can help take advantage of the gastrocolic reflex.
CONCLUSIONS
A practical algorithmic approach to assessment and treatment of constipation in older people is illustrated in Figures 87-3 and 87-6. Health care professionals should routinely inquire about constipation symptoms in older people and be alert to the presence of clinical constipation in individuals unable to communicate. In many older people with constipation symptoms, lifestyle advice (diet, fluids, exercise, toileting habits) will preempt the need for laxative therapy. In higher-risk patients, a stepwise approach to prescribing laxatives, suppositories, or enemas should be used, with the goal of achieving comfortable and regular evacuation. Rectal evacuation difficulties should be specifically addressed in order to identify conditions that may require additional interventions.
FIGURE 87-6. Treatment algorithm for the management of chronic constipation in older adults. (Reproduced with permission from Rao SS, Go JT. Update on the management of constipation in the elderly: new treatment options. Clin Interv Aging. 2010;5:163–171.)
FURTHER READING
Brenner DM, Fogel R, Dorn SD, et al. Efficacy, safety, and tolerability of plecanatide in patients with irritable bowel syndrome with constipation: results of two phase 3 randomized clinical trials. Am J Gastroenterol. 2018;113:735–745.
DeMicco M, Barrow L, Hickey B, et al. Randomized clinical trial: efficacy and safety of plecanatide in the treatment of chronic idiopathic constipation. Therap Adv Gastroenterol. 2017;10:837–851.
Emmett CD, Close HJ, Yiannakou Y, et al. Trans-anal irrigation therapy to treat adult chronic functional constipation: systematic review and meta-analysis. BMC Gastroenterol. 2015;15:139.
Erdogan A, Rao SS, Thiruvaiyaru D, et al. Randomised clinical trial: mixed soluble/insoluble fibre vs. psyllium for chronic constipation. Aliment Pharmacol Ther. 2016;44:35–44.
Menees SB, Franklin H, Chey WD. Evaluation of plecanatide for the treatment of chronic idiopathic constipation and irritable bowel syndrome with constipation in patients 65 years or older. Clin Ther. 2020;42:1406–1414 e4.
Miner PB Jr, Koltun WD, Wiener GJ, et al. A randomized phase III clinical trial of plecanatide, a uroguanylin analog, in patients with chronic idiopathic constipation. Am J Gastroenterol. 2017;112:613–621.
Nullens S, Nelsen T, Camilleri M, et al. Regional colon transit in patients with dys-synergic defaecation or slow transit in patients with constipation. Gut. 2012;61:1132–1139.
Vijayvargiya P, Camilleri M. Use of prucalopride in adults with chronic idiopathic constipation. Expert Rev Clin Pharmacol. 2019;12:579–589.
Wanden-Berghe C, Patino-Alonso MC, Galindo-Villardon P, et al. Complications associated with enteral nutrition: CAFANE Study. Nutrients. 2019;11.
SECTION E: Oncology
Chapter 88
Cancer and Aging: General Principles
Carolyn J. Presley, Harvey Jay Cohen, Mina S. Sedrak
INTRODUCTION
This chapter discusses many of the general relationships between cancer and aging. It focuses on the epidemiologic, basic etiologic, and biological relationships between the processes of aging and neoplasia and on the generalizable aspects of managing cancer in the older patient. The approach to specific malignancies is covered in subsequent chapters related to the appropriate organ system. Cancer is a major problem for older adults. It is the second leading cause of death after heart disease in the United States, and age is the single most important risk factor for developing cancer. According to the National Cancer Institute (NCI), over 60% of all newly diagnosed malignant tumors and 70% of all cancer deaths occur in persons 65 years or older. Incidence and mortality data from the NCI's Surveillance, Epidemiology, and End Results (SEER) Program (Figure 88-1) show that age-specific cancer incidence and mortality rise progressively throughout the age range. While the rate of increase in incidence diminishes somewhat in the oldest age groups, and the rate actually falls slightly in the very oldest (perhaps a survivor effect), the overall risk of developing cancer is certainly greatest in the later years. Because the number of people in this country older than age 65 is rising rapidly, and the oldest of the old, that is, those older than age 85, are increasing at the greatest rate, geriatricians, generalists, and internists will be encountering an increasing number of older adults with cancer in their practices and within the entire health care system.
FIGURE 88-1. Comparison of the percentage of total cancer incidence and mortality by age with age-specific incidence and mortality. (Data from SEER Cancer Statistics Review; 2013– 2017: National Center for Health Statistics; 2014–2018.)
Data from the SEER Program show that 5-year survival for most types of cancer decreases with advancing age. Possible explanations include an altered natural history of some cancers, competing comorbid medical conditions, decreased physiologic reserve compromising the ability to tolerate therapy, physicians' reluctance to provide aggressive therapy, and barriers in the older person's access to care. Communication between health care providers and older patients may be hampered by deficits in hearing, vision, and cognition. The older cancer patient often has an older caregiver, and the diagnosis of cancer often affects the health-related quality of life of both individuals. Thus, not only does cancer occur at an increased rate in older individuals, but it also has a significant impact on their lives, from the standpoint of both morbidity and mortality. All of these challenges contribute to defining "geriatric oncology" as a true subspecialty and have led to the development of guidelines by the National Comprehensive Cancer Network (NCCN) that address special considerations in older patients with cancer.
Learning Objectives
Identify risk factors inherent in the aging process that may promote neoplasia.
Apply an understanding of lifespan and patient-centered choice to cancer screening recommendations.
Integrate the concept of comprehensive geriatric assessment (CGA) and personal choice into cancer treatment decisions and individual management plans.
Become aware of age-related factors that influence the risk of toxicity from cancer treatment such as radiation, surgery, or systemic treatment (eg, chemotherapy, immunotherapy).
Key Clinical Points
1. Aging is associated with an increasing risk of developing cancer, with a number of mechanisms likely contributing, including duration of exposure to carcinogens, susceptibility to DNA and other cellular damage with impaired repair mechanisms, a proneoplastic environment (eg, low-level inflammation), and impaired immune surveillance.
2. The 5-year survival rate for most cancers decreases with advanced age; many factors may contribute, including altered tumor biology (ie, more aggressive tumor behavior), tolerance to therapy, and the presence of multiple comorbidities.
3. Age bias in diagnostic and treatment decisions for older adults with cancer exists. Older adults are continually underrepresented in clinical trials.
4. Although performance status (eg, Eastern Cooperative Oncology Group [ECOG] performance score) is routinely applied to patients with cancer, a CGA that includes assessment tools to predict functional age based on physical function, comorbidities that may interfere with cancer therapy, nutritional status, polypharmacy, psychological and cognitive status, socioeconomic issues, and geriatric syndromes has been shown to add substantial prognostic information for older adults.
5. Patients and their caregivers should continually discuss goals of care throughout the course of the disease with the treating oncologist and primary care physician.
RELATIONSHIP BETWEEN AGING AND NEOPLASIA
The processes resulting in the evolution of neoplasia and those resulting in aging involve many of the same biological mechanisms, known respectively as the Hallmarks of Cancer and the Hallmarks of Aging (eg, genetic instability, proteostasis, stem cell function, metabolism, intercellular communication), but generally working in opposite directions. In the genesis of neoplasia, they result in the accumulation of damage leading to increased cell proliferation, while in aging, they result in reduced cell proliferation or cellular senescence. Thus, cancer and aging have often been referred to as opposite sides of the same coin. Both are multistage processes. Neoplasia proceeds through a series of changes known as initiation, the first "hit"; promotion, during which cell proliferation increases; and progression, itself a multistage step, resulting in the transformation of a cell from a premalignant to a malignant state. Local invasion and/or distant metastasis may then ensue, but the mechanisms governing these processes are less well understood. As described elsewhere (see Chapter 1), aging also involves the sequential accumulation of alterations in similar mechanisms, but in what appears to be a less distinctly separated set of events.
Gene regulation is a particularly important aspect of neoplastic evolution. Broadly, two types of genes are involved: oncogenes, which when mutated transform cells from a premalignant to a malignant state, and suppressor genes, which generally prevent uncontrolled cell growth.
Oncogene activation or suppressor gene inactivation can result in malignant tumor formation. It is generally felt that multiple genetic alterations are required for progression to malignancy, with each subsequent "hit" increasing the likelihood of malignancy. It has become apparent in recent years that while gene alterations are important in this process, changes in the environment in which these cells live play a critical role in neoplastic evolution. This may be of particular importance in the aging/cancer relationship, as discussed further below.
There are several theories on how the aging process influences the process of neoplastic transformation to result in the markedly increased rates of cancer in older adults. These include the following:
Longer duration of carcinogenic exposure: It is possible that aging simply allows the time necessary for the accumulation of cellular events to develop into a clinical neoplasm. There is evidence for age-related accumulation and expression of genetic damage. Somatic mutations are believed to occur at a rate of approximately 1 in 10⁶ cell divisions, with approximately 10¹⁶ cell divisions occurring in the lifetime of a human being. Certainly, the complex set of events required in the multistep process of carcinogenesis, for example, as described for colon cancer in humans, does occur over time. The passage of time alone, however, is not likely to explain the entire cancer phenomenon. The time required for a mutated cell to become a malignant cell and then subsequently a detectable tumor has been estimated at approximately 10% to 30% of the maximum lifespan for a given animal species, which may vary from just a few years to more than 100 years.
Altered susceptibility of aging cells to carcinogens: Data regarding carcinogen exposure are contradictory. In some cases, the incidence of skin tumors in mice produced with benzpyrene has been related more to dose than to age, whereas in other models, accelerated carcinogenesis as a function of age has been demonstrated, for example, when dimethylbenzanthracene (DMBA) was applied to skin grafts of young and old mice. An age-related increase in the sensitivity of lymphocytes to cell-cycle arrest and chromosome damage after radiation has also been demonstrated. It is possible that carcinogen metabolism is altered with age as well, but findings from such studies have also been contradictory.
Decreased ability to repair DNA: It is possible that damage, once initiated, is more difficult to repair in older cells. A number of studies demonstrate decreased DNA repair as a function of age following damage by carcinogens as well as radiation. Such repair failures may also be reflected in increased karyotype abnormalities in aged normal cells as well as in older patients with neoplastic disease.
Oncogene activation or amplification or decrease in tumor-suppressor gene activity: These processes might be increased in the older adult, resulting either in increased action or promotion or in differential clonal evolution. Although evidence is currently limited, there have been observations of increased amplification of proto-oncogenes and their products in aging fibroblasts in vitro, as well as evidence for increased c-myc transcript levels in the livers of aging mice. Alternatively, factors such as genetic alterations or DNA damage could lead to inactivation of cancer-suppressor genes. Given that age-related mutations frequently appear to result in loss of function, alterations in tumor-suppressor genes may prove to be an important mechanism.
Telomere shortening and genetic instability: The function of telomeres and the enzyme telomerase appears to be intimately involved in both senescence and neoplasia. Telomeres, the terminal ends of all chromosomes, shorten progressively as cells age; this decline begins around age 30 and continues at a loss of approximately 1% per year. Because the major function of telomeres is to protect the stability of the more internal coding sequences (that is, to allow cells to divide without losing genes), the loss of this function may lead to genetic instability, which may promote mutations in oncogenic or tumor-suppressor gene sequences. Without telomeres, chromosome ends could fuse together and degrade the cell's genetic blueprint, making the cell malfunction, become cancerous, or die.
Cellular senescence and the microenvironment: Older adults have been shown to accumulate senescent cells, as demonstrated by β-galactosidase staining and other methods. A number of factors in the tumor microenvironment are critical for the development of the malignant phenotype, especially invasion and metastasis. There is accumulating evidence that senescent cells can have deleterious effects on the tissue microenvironment. The most significant of these effects is the acquisition of a "senescence-associated secretory phenotype" (SASP) that turns senescent fibroblasts into proinflammatory cells secreting factors such as inflammatory cytokines (eg, IL-1 and IL-6), epithelial growth factors (eg, heregulin), and matrix metalloproteinases (eg, MMP-3), which can promote tumor progression. A variety of stresses can provoke cellular senescence, including telomere dysfunction resulting from repeated cell division (termed replicative senescence), mitochondrial deterioration, oxidative stress, severe or irreparable DNA damage and chromatin disruption (genotoxic stress), and the expression of certain oncogenes (oncogene-induced senescence). These stresses can be induced by external or internal chemical and physical insults encountered during the course of the lifespan, by therapeutic interventions (eg, radiation or chemotherapy), or as a consequence of endogenous processes such as oxidative respiration and mitogenic signals.
In early life, cellular senescence suppresses cancer by arresting cells at risk of malignant transformation. However, the ability of senescent cells to promote carcinogenesis in later life, as described above, suggests that the senescence response is antagonistically pleiotropic: a function important for the organism in early life (through the reproductive period) will be selected for, despite the fact that it may be quite injurious in later life. In this sense, senescence could be viewed as the price we pay in later life for the rigorous attempt to control proliferation and avoid neoplasia early on. Recently, drugs that destroy or inhibit senescent cells (senolytics) have been studied for their potential to delay aging and age-related diseases such as cancer.
Decreased immune surveillance: A decrease in immune surveillance, or immunosenescence, could contribute to the increased incidence of malignancies. In animal models there is a considerable amount of evidence for a loss of tumor-specific immunity with progressive age. This includes the altered capacity of old mice to reject transplanted tumors, the close relationship between susceptibility to malignant melanomas and the rate of age-related T-cell–dependent immune function decline, and the ability by immunopharmacologic manipulation to increase age-depressed tumoricidal immune function and to decrease the incidence of spontaneous tumors. The evidence linking such data to age-associated immune deficiency and the rise of cancer incidence in humans, however, is mainly circumstantial and not likely to be fully explanatory, as the types of tumors seen in the most striking examples of immune deficiency are very different from those seen in the usual aging human.
Figure 88-2 summarizes in schematic fashion the potential interaction of the many factors that may be important in the increase of cancer with age. It indicates the interface of time- and age-related events, such as free radical and other carcinogenic exposure, resulting in initiation, followed by cumulative promoting events, including mutations and other alterations in critical genes, which ultimately exceed a threshold of host resistance factors that have been progressively reduced during the aging process.
FIGURE 88-2. Age and cancer susceptibility. This figure presents a model incorporating the various factors that may play a role in the increased incidence of cancer with age.
CLINICAL PRESENTATIONS AND DISEASE BEHAVIOR
Screening in Asymptomatic Individuals
Older adults continue to be both underscreened, and thus underdiagnosed with cancer, and overscreened, placing them at increased risk of overtreatment. It is well known that mammography and Papanicolaou (Pap) tests are underutilized in certain older racial and ethnic minority groups and in those who have less than a high school education or live below the poverty level. On the other hand, there is general agreement that routine cancer screening is unlikely to yield a net benefit for individuals with limited life expectancy. Despite this, a number of studies have continued to show that screening is common in individuals with less than a 5-year life expectancy, and even in many nursing home residents with severe disability who could likely not benefit from treatment. Cancer screening in such patients not only has implications for the utilization of health care resources in a setting unlikely to result in net benefit but may also cause net patient harm owing to subsequent diagnostic procedures and overtreatment. A general approach to screening that may help avoid these problems is discussed in Chapter 12, and approaches in specific tumor types are discussed in Chapters 38 and 89 to 93.
Initial Presentation
As an extension of the screening concept, the goal of initial cancer detection is to make the diagnosis as early as possible, in the hope that treatment at the earliest stages of disease will yield the best survival rates. It is therefore of great importance that both patient and physician pay attention to symptoms that may herald the onset of a neoplastic process. Current evidence suggests that once they notice a symptom that appears to be related to cancer, most older individuals do not delay appreciably in seeking medical help. In one New Mexico study of 800 patients over 65 years, only 29% of the subjects were asymptomatic when cancer was detected, and 48% presented within 2 months of symptom onset. However, 19% of the subjects delayed seeking care for at least 12 weeks, and 7.4% delayed at least 1 year. Older women, who are at greater risk of developing breast cancer, are also more likely to delay their presentation. Physicians may also play a role in delaying further diagnostic pursuits in older patients. Part of the problem may lie in a failure to recognize new signs and symptoms in patients with multiple disease processes. It is easy to attribute symptoms such as anorexia, weight loss, or a decrease in performance status to social or psychological changes, and the increasing prevalence of findings such as anemia in older patients may lower the index of suspicion that such a finding reflects a new neoplastic process. Hence, symptom awareness and appraisal by patients and doctors are important determinants of timely presentation and investigation, although this must be balanced by judgment concerning the risk-benefit ratio of diagnostic evaluations in individual patients depending on their other medical conditions.
For example, the initial discovery of a new symptom in a previously totally well, active 80-year-old may be pursued rather differently than a similar discovery in a severely demented, bedbound individual with severe congestive heart failure, diabetes, and pulmonary failure.
Biological Behavior of Tumors in the Older Host
The effect of the aging process on the clinical course of cancer, or, to put it another way, whether cancer behaves differently in the older individual, is not clear-cut. Although the SEER data noted previously suggest that for many cancers the 5-year survival rate is lower in older people, it is possible that this relates more to comorbid disease and other factors than to aging per se. On the other hand, there is a widespread belief that cancers may behave more indolently in older patients. These are important issues because they may, to a considerable degree, affect decisions regarding treatment. Both clinical and experimental evidence support both sides of this issue, and there is likely a spectrum of responses dependent on tumor type as well as individual host status. One indicator of this phenomenon is the extent of disease at presentation. For most cancers examined, there has been no consistent difference in the stage of disease at presentation across age groups, and where differences have been found, they do not always run in the same direction. For malignant melanoma, older patients have consistently been found to have more advanced-stage local disease with deeper penetrating lesions at presentation. For breast cancer, some studies show a greater proportion of older patients with distant metastatic spread at presentation, whereas for lung cancer the opposite has been found, with older patients presenting with localized disease in a greater proportion of cases. Uterine and cervical cancers have in some cases been noted to be later in their course at presentation in older individuals.
Of course, even these differences might be related to such phenomena as delay in the patient’s presenting for diagnosis, delay in pursuing the diagnosis, intensity of diagnostic endeavors, or a combination of these factors, or, on the other hand, a greater chance for a serendipitous finding because of more frequent visits to physicians.
Another biological factor that may influence neoplastic behavior in hosts of different ages is the histologic subtype of the tumor. Thus, while thyroid cancer overall appears to behave more aggressively in the older host, it is also true that a larger proportion of thyroid neoplasia in older patients is made up of anaplastic carcinoma, which behaves more aggressively at any age. Similarly, for malignant melanoma, although an increased proportion of older people have melanomas of poor prognostic histologic type and location at presentation, older individuals have a poorer prognosis for survival than younger ones independent of this phenomenon, even for localized disease. Such biological differences may be manifested in other ways, as in the case of breast cancer, in which older women have an increased frequency of estrogen receptor–positive breast cancer, probably related to hormonal influences of the postmenopausal state. An additional factor is the longer tumor doubling time seen in breast cancer cells from older women. Because estrogen receptor (ER) positivity and longer doubling times are associated with better prognosis, more slow-growing tumors, and longer disease-free survival, these phenomena, rather than age per se, might explain why this cancer appears to behave more indolently in an older individual. Despite this, overall cancer-related survival is lower in older women, emphasizing the complex interactions of tumor and host that must be considered.
Acute myelogenous leukemia (AML) has a much poorer prognosis in older patients, and the biology underlying the poor prognosis and poor treatment outcomes of AML in the older adult has been extensively studied. Findings to date include a high incidence of poor-prognosis karyotypes (5q-, 7q-), a high frequency of preceding myelodysplastic syndromes, and increased expression of proteins involved in intrinsic resistance to chemotherapeutic agents. Compared to younger patients, leukemic blasts from older patients with AML had a lower propensity to apoptosis following traditional remission-induction treatment with ara-C and daunorubicin. Gene-expression studies have demonstrated that older AML patients tend to have worse survival than their younger counterparts, which may be driven in part by increased expression of ras, src, and tumor necrosis factor (TNF) pathways in the bone marrow of older patients.
Experimental data in animal models likewise show this spectrum in the rate of tumor growth and progression as a function of age. In these studies, the ability to contain tumor growth depends on the particular host-tumor system used, thus mimicking the clinical situation to some extent. A proposed explanation for the situation in which the older host more effectively controls the rate of tumor growth is a paradoxical effect of decreasing immune function with age, that is, decreased activity of those cells in the old host's immune system that, under stimulation by the neoplastic process, produce tumor-enhancing factors such as angiogenesis factor. When this occurs, tumor growth might be expected to be diminished. To what extent these various factors play a role in the biological behavior of neoplasia in the aging human host remains a fascinating puzzle to be unraveled.
MANAGEMENT
General Principles
Balancing the benefits and risks of cancer treatment in older patients is challenging because of the dearth of high-quality, evidence-based studies. Older adults, especially those with age-associated vulnerabilities, remain vastly underrepresented in the research that sets the standards for the safety and efficacy of cancer treatments. This lack of evidence subjects this group to clinical uncertainty that leads to both under- and overtreatment.
To help inform treatment decisions, expert panels and guidelines have been developed regarding the practical assessment and management of older patients with cancer. In this section, we review the general approach that the treating physician should consider when caring for older patients with cancer. We discuss the Comprehensive Geriatric Model framework, which accounts for the biological, psychological, social, and treatment factors involved in the patient's well-being, and highlight the need to individualize treatment decisions. We also review the use of biological and clinical markers in the evaluation of older adults with cancer and discuss the role of aging-directed interventions in improving treatment for this rapidly growing population.
The Comprehensive Geriatric Model
Physicians cannot rely on chronological age alone when making decisions related to the treatment of older patients with cancer. Aging is a heterogeneous process that is not captured by chronological age, and there is substantial heterogeneity in the physiologic and functional characteristics of older persons. We have proposed one framework in which the general aspects of treatment decision-making can be considered for the individual older patient with cancer, shown in Figure 88-3 and called the Comprehensive Geriatric Model. It graphically presents a number of concepts critical to the care of the older adult: that there is decreased functional reserve and that, as an extension of Engel's Bio-Psycho-Social Model, all of these aspects of the individual's background must be taken into account when making decisions about the new process, that is, the cancer. Each of these levels, for example, biological or psychosocial, can create interactions that influence both the cancer and the host, and, likewise, any intervention directed at the cancer may influence both the cancer and each of these levels of the host's function. Conversely, each of these levels of function, when compromised by the aging process or other comorbid diseases, may